In the latest movie adaptation, Surf Dracula (2024) by DC, they already introduced the less grounded, more magical elements of the Surf Dracula franchise, and even introduced the Surfer Werewolves to sell more toys and build up an expanded universe. They also did not use Van Chil’sing as his archenemy, but instead made him fight a gigantic sky hole that spawned an army of faceless minions.
So Perplexity can kind of weakly analyze the first few pages of small PDFs one at a time, but I’d love to have something that would let me upload several hundred research papers and textbooks, which could then be analyzed for consensus and contradictions and give me more meaningful search results and summaries...
I have used a small R package that lets you read the text content of a PDF and send it to a local Llama model via Ollama, or to one of the large LLM APIs. I could use that to get structured answers in JSON format on a whole folder of papers, but the context length of a typical model is only long enough to hold a single (roughly 40-page) paper in memory. So I had to get separate structured answers for each paper and then generate a complete summary from those. Unfortunately, that is not user-friendly yet.
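The per-paper loop described above can be sketched in Python (the commenter used an R package, but the same flow works against Ollama's REST API directly). The model name, prompt wording, and 16,000-character cutoff below are illustrative assumptions, not whatever the original package uses; `summarize_folder` also assumes the PDF text has already been extracted to `.txt` files.

```python
import json
from pathlib import Path
from urllib import request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint
MAX_CHARS = 16_000  # rough stand-in for the model's context window

def truncate_to_context(text: str, limit: int = MAX_CHARS) -> str:
    """Keep only as much of the paper as fits in the model context."""
    return text[:limit]

def ask_paper(paper_text: str, question: str, model: str = "llama3") -> dict:
    """Send one paper to a local Ollama model and request a JSON-structured answer."""
    prompt = (
        "Answer in JSON with keys 'answer' and 'evidence'.\n"
        f"Question: {question}\n\nPaper:\n{truncate_to_context(paper_text)}"
    )
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "format": "json",   # Ollama's switch for constrained JSON output
        "stream": False,
    }).encode()
    req = request.Request(OLLAMA_URL, data=payload,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.loads(json.load(resp)["response"])

def summarize_folder(folder: str, question: str) -> dict:
    """One structured answer per paper; merging them needs a second summarization pass."""
    answers = {}
    for txt in sorted(Path(folder).glob("*.txt")):  # pre-extracted paper text
        answers[txt.name] = ask_paper(txt.read_text(), question)
    return answers
```

The truncation step is exactly the limitation the comment complains about: each paper must fit the context window on its own, so cross-paper consensus can only come from a second pass over the per-paper answers.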
Transcription: 4 panels, arranged 2x2. The first panel shows Jean-Luc Picard holding up a frame containing a number of medals, with the text “These are your Starfleet merit badges, are they not, Mr. Data?” The second panel shows Data, with the text “Yes, Captain.” The third is a close-up image of...
I know Python, R, the Stata ado-language (a horrible proprietary programming language), MATLAB's language, JavaScript, and some minimal C++. What I know really well, though, is R and Python. So: a typical profile for a (data) scientist.
For me it’s gotta be something from ARTE (the French/German culture television channel). Either the one about Jodorowsky's weird Dune project or the three-part series about the history of racism. Both were extremely well-made documentaries.
Magneto is definitely worse at branding than Charles X. Xavier (lemmy.world)
Surf Dracula® (lemmy.world)
Bee is stored in the walls (lemmy.world)
And usually it's just right (lemmy.world)
Going viral (mander.xyz)
Pills (Take Two) (lemmy.zip)
LLM queries for personal PDF libraries?
No mother, it's just the northern lights (lemmy.world)
My coat of arms (lemmy.world)
In what subtle (or significant) ways has your hometown changed since your childhood?
The penis mightier than the sword (lemmy.world)
Ewoks are just savage teddy bears (lemmy.world)
Some of my iterations are delightfully recursive (lemmy.world)
Don't tell him how Swabian Ravioli are called in the local dialect (lemmy.world)
Experiments (mander.xyz)
Measure of a merit badge (aussie.zone)
Great giant sloth, of ice planet Hoth! (lemmy.world)
Which programming languages do you know?
What's the most fascinating documentary you've ever watched and why did it captivate you?
Don't forget to tip your doctor for a good service (lemmy.world)
Franz Ferdinand's unfortunate Uber experience (lemmy.world)
Decapod Division's latest recruit reporting for duty (lemmy.world)