Hello #R or #Postgresql specialists. I have a few hundred views in a database schema that I need to turn into a PDF (with table sizes adjusted, since the number of rows and columns varies a lot). #knitR seems to be the relevant tool, but I don't really know how to use it efficiently. I'll take any advice!
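One way to sketch this, assuming a reachable Postgres database and the DBI, RPostgres, knitr, and kableExtra packages: loop over the views in the schema inside a chunk with `results='asis'` and let kableExtra's `scale_down` option shrink wide tables to the page. The connection details and schema name here are placeholders.

```r
# Minimal sketch for a knitr chunk (results='asis') that prints every view
# in a schema as a LaTeX table. DB name and schema are hypothetical.
library(DBI)
library(RPostgres)
library(kableExtra)

con <- dbConnect(RPostgres::Postgres(), dbname = "mydb")

views <- dbGetQuery(con, "
  SELECT table_name
  FROM information_schema.views
  WHERE table_schema = 'public'")$table_name

for (v in views) {
  df <- dbReadTable(con, Id(schema = "public", table = v))
  # longtable handles many rows across pages; scale_down shrinks wide tables
  print(kbl(df, format = "latex", longtable = TRUE, caption = v) |>
          kable_styling(latex_options = c("scale_down", "repeat_header")))
}

dbDisconnect(con)
```

`scale_down` and `longtable` don't combine on one table in kableExtra, so in practice you might branch on `ncol(df)` to decide which option to apply.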
Absolutely devastating news. I would not have accomplished what I have accomplished in the last decade without Yihui's work on #knitr. If you've ever encountered a website, report, or book built with #RStats in recent memory, you have Yihui to thank.
Phew, had a really productive but exhausting #Rstats day today. It's a report that works with #quarto and #knitr, and I created something like a "create_graph()" function, because the graphs are very similar and it saves a lot of copy-pasting.
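A hedged sketch of what such a helper could look like, assuming ggplot2 and plots that differ only in the variable and title (the function body and arguments here are illustrative, not the author's actual code):

```r
library(ggplot2)

# Hypothetical helper: one function for a family of near-identical plots,
# varying only the plotted variable and the title.
create_graph <- function(data, var, title) {
  ggplot(data, aes(x = .data[[var]])) +   # .data pronoun: column given as a string
    geom_histogram(bins = 30, fill = "steelblue") +
    labs(title = title, x = var, y = "Count") +
    theme_minimal()
}

create_graph(mtcars, "mpg", "Miles per gallon")
create_graph(mtcars, "hp", "Horsepower")
```

The `.data[[var]]` pronoun is what makes the column name parameterizable without non-standard evaluation tricks.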
I really want to make one thing clear: Without #Rstudio and #ggplot and #dplyr and all things #R I could not do my job. Neither Excel, nor Stata, nor SPSS could help in that specific way. I wouldn't get anything of the non-data tasks done...
This is how I usually set up the beginning of my #knitr / #quarto scripts to download data from #osf that I then use for analysis in the rest of the script.
This way I only need to share the script, and anybody who wants to #reproduce the results will always get the right datafile.
Let me know if there's an easier or better way to do this!
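For comparison, a minimal version of such a setup chunk using the osfr package (the project ID and file name below are hypothetical placeholders, not the author's actual project):

```r
# Setup chunk sketch: fetch the data file from OSF if it isn't present,
# so the script alone is enough to reproduce the analysis.
library(osfr)

data_file <- "mydata.csv"          # hypothetical file name
if (!file.exists(data_file)) {
  osf_retrieve_node("abcde") |>    # hypothetical OSF project ID
    osf_ls_files() |>
    dplyr::filter(name == data_file) |>
    osf_download()
}
dat <- read.csv(data_file)
```

Pinning a specific OSF file version (or recording a checksum) would additionally guarantee that everyone gets byte-identical data.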
I have a #Quarto question: is there any major advantage over plain #pandoc if you don't use #knitr-style code execution? Like for regular academic writing without dynamic computation?