Computational Biochemist, PhD Student at the Heidelberg Institute for Theoretical Studies with a love for teaching, theatre, bouldering and movement.
Send otter videos 🦦
@cstross I see upon reading that the article states, "AI platforms like ChatGPT often hallucinate totally incorrectly [sic] answers out of thin air."
While this is true as far as it goes, I believe it misstates, and understates, the problem. A more accurate statement of the problem is, "Large language models hallucinate ALL of their responses. Some of the hallucinations merely happen to coincide well with reality." And you obviously cannot tell those from the ones that don't.
They do not understand anything. They are not designed for understanding. What they are designed to do is very specifically to generate grammatically correct output that looks convincing.
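To make that point concrete, here is a toy sketch (mine, not the poster's) of how a language model produces text: sample the next token from a learned distribution over continuations, append it, repeat. Nothing in the loop consults reality; fluency is the only objective. A tiny hand-written bigram table stands in for the billions of learned parameters.

```python
# Toy next-token sampler: grammatical-looking output, no notion of truth.
import random

# Hand-written stand-in for a learned next-token distribution.
bigrams = {
    "the":    ["cat", "moon", "answer"],
    "cat":    ["sat", "is"],
    "moon":   ["is"],
    "answer": ["is"],
    "is":     ["made", "the"],
    "sat":    ["the"],
    "made":   ["the"],
}

def generate(start="the", n=8, seed=0):
    """Sample n continuation tokens, one at a time, from the table."""
    random.seed(seed)
    out = [start]
    for _ in range(n):
        # Pick any plausible continuation; correctness never enters into it.
        out.append(random.choice(bigrams[out[-1]]))
    return " ".join(out)
```

Whether the walk lands on "the cat sat" or "the moon is made" is pure chance; both are equally valid outputs to the sampler.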
If you're looking for a new terminal because iTerm 2 added AI, have a look at @wez's WezTerm - I can't recommend it highly enough.
Configuration has a bit of a learning curve, especially if you don't know Lua, but if you spend a decent amount of time in your terminal it's worth the investment; you can make it exactly what you want. The docs are pretty solid too.
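For a sense of what that Lua configuration looks like, a minimal ~/.wezterm.lua might be something like this (the font and colour scheme are just placeholder choices, not recommendations):

```lua
-- Minimal WezTerm config sketch; values below are placeholders.
local wezterm = require 'wezterm'
local config = wezterm.config_builder()

config.font = wezterm.font 'JetBrains Mono'  -- any font installed on your system
config.font_size = 12.0
config.color_scheme = 'Catppuccin Mocha'     -- one of the bundled schemes

return config
```

From there the same file can grow to cover keybindings, multiplexing, and per-OS overrides, which is where the investment pays off.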
OK, let's try something new. I'm not well connected because I'm bad at in-person networking, and this is compounded by my decision to stop flying to conferences. So: can I use Mastodon to find potential experimental colleagues who would like to work together?
Ideally for me, this would be people in Europe so I can visit by train, but it's not essential. I have some ideas for interesting projects and grant applications, and I'd love to develop those into concrete projects in close participation with experimental colleagues.
One of the main themes I'm interested in is how we can relate various neural mechanisms (e.g. inhibition, recurrence, nonlinear responses) to functions, using computational modelling to ask 'what if' questions that couldn't be answered by experiments alone.
I'm also interested in thinking about how we can use "information bottleneck" ideas to think more clearly about what computations networks of neurons are doing, going the next step beyond representing information to computing / discarding information.
A big question I'd like to answer is how different brain regions work together in such a flexible and scalable way.
A technique I'm very excited about at the moment is using modern ML algorithms to train spiking neural networks on cognitively challenging tasks, making them directly comparable to both psychophysical and electrophysiological data.
Part of that could involve building new mechanisms, like dendritic structure or neuromodulators, into those networks and allowing the trained networks to make use of them in the best way possible.
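As a loose illustration of the kind of model involved (a generic textbook sketch, not code from any of my papers), here is a leaky integrate-and-fire layer simulated in discrete time. One common approach to ML training of such networks replaces the hard threshold's derivative with a smooth "surrogate" during backprop; only the forward pass is shown here.

```python
# Discrete-time leaky integrate-and-fire (LIF) layer, forward pass only.
import numpy as np

def lif_forward(inputs, tau=10.0, v_th=1.0):
    """Simulate a layer of LIF neurons.

    inputs: array of shape (timesteps, n_neurons) of input currents.
    Returns a 0/1 spike train of the same shape.
    """
    decay = np.exp(-1.0 / tau)         # membrane leak per timestep
    v = np.zeros(inputs.shape[1])      # membrane potentials
    spikes = np.zeros_like(inputs)
    for t, i_t in enumerate(inputs):
        v = decay * v + i_t            # leaky integration of input current
        spikes[t] = (v >= v_th).astype(float)
        v = v * (1.0 - spikes[t])      # reset neurons that just spiked
    return spikes
```

Everything interesting (recurrence, dendrites, neuromodulation) would be extra state and extra terms in that update loop, which is exactly what makes the framework so extensible.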
I'd also love to build jointly motivated experimental and theoretical/synthetic datasets to test models against.
If any of that sounds interesting to you, take a look at some of my recent papers and get in touch. I'd love to hear from you.
For those who aren’t aware, Microsoft have decided to bake essentially an infostealer into the base Windows OS and enable it by default.
From the Microsoft FAQ: “Note that Recall does not perform content moderation. It will not hide information such as passwords or financial account numbers."
Info is stored locally - but rather than something like RedLine stealing your local browser password vault, an attacker can now just steal the last 3 months of everything you’ve typed and viewed in one database.
Recall uses a bunch of services themed CAP - Core AI Platform. They're enabled by default.
It writes a constant stream of screenshots (the product brands them “snapshots”, but they’re hooked screenshots) into the current user’s AppData as part of image storage.
The NPU processes them and extracts the text into a database file.
The database is SQLite, and you can access it as the user, including programmatically. It 100% does not need physical access and can be stolen.
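A sketch of why that last point matters: any code running as the logged-in user can open a SQLite file with nothing but the standard library - no elevation, no physical access. The table and column names below are hypothetical placeholders, not Recall's actual schema.

```python
# Any process running as the user can read a user-owned SQLite DB.
# "captured_text" and its "text" column are HYPOTHETICAL placeholders,
# not Recall's real schema.
import sqlite3

def dump_captured_text(db_path):
    """Return every row of extracted text stored in a capture database."""
    with sqlite3.connect(db_path) as conn:
        rows = conn.execute("SELECT text FROM captured_text").fetchall()
    return [text for (text,) in rows]
```

Three lines of stdlib code, no special tooling - which is the whole point about exfiltration not needing physical access.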
Scratch, the 'language' for teaching kids the basics of #programming, has better first-class support for async work than a bunch of actual programming languages
Back home after a very busy week in San Francisco. I spoke with @caseynewton, @mmasnick, @harrymccracken, @laurengoode, @chrismessina, @Markoff and so many others. In an age where the old web is gradually getting killed off by AI, there is still hope for the future in the shape of the fediverse.
During lunch a friend mentioned that you can just supply an HTTP URL to vim on the command line and it will use curl to download that resource and let you edit the content. I jokingly asked whether, if you enter :w, it would then issue an HTTP POST back to the origin, which is of course ridiculous.
Today, we launched our new Mastodon instance. It will provide a privacy-focused space to engage with and get the latest from our Commissioners, departments, and the official voices of the Commission.
We want to thank @Mastodon for stewarding us and helping us make this possible.
Fostering European digital players is vital to our strategy for a stronger #DigitalEU.
This is a unique opportunity to grow the community even more. Let's get there!
Hi @nick! Thank you for your questions! Our old instance was created, hosted and managed by @EDPS as part of a pilot project for a Mastodon server. As the pilot project was coming to an end, we decided to build on its success by setting up a permanent instance, ensuring the continued and uninterrupted presence of our institution on Mastodon. The new instance is hosted and managed by the European Commission and has no expiry date.
On a personal note: this release also marks (±4 days) the 10-year anniversary of the release that contained my first contributions. That was pandoc 1.12.4, released May 7th, 2014.
I'm happy that pandoc is still going strong. Many thanks to all users, contributors, and the community as a whole for making this such a pleasant experience!
Like many other technologists, I gave my time and expertise for free to #StackOverflow because the content was licensed CC-BY-SA - meaning that it was a public good. It brought me joy to help people figure out why their #ASR code wasn't working, or assist with a #CUDA bug.
Now a deal has been struck with #OpenAI to scrape all the questions and answers on Stack Overflow to train #GenerativeAI models like #LLMs - without attribution to authors, as required under the CC-BY-SA license covering Stack Overflow content, and with the results sold back to us, even though the SA clause requires derivative works to be shared under the same license. So I have issued a Data Deletion request to Stack Overflow to disassociate my username from my contributions, and am closing my account, just like I did with Reddit, Inc.
The data I helped create is going to be bundled in an #LLM and sold back to me.
In a single move, Stack Overflow has alienated its community - which is also its main source of competitive advantage - in exchange for token lucre.
Stack Exchange, Stack Overflow's former instantiation, used to fulfill a psychological contract - help others out when you can, with the expectation that others may in turn assist you in the future. Now it's not an exchange, it's #enshittification.
Programmers now join artists and copywriters, whose works have been snaffled up to create #GenAI solutions.
The silver lining I see: once OpenAI creates LLMs that generate code - like Microsoft has done with Copilot on GitHub - where will they go to get help with the bugs those generative AI models introduce, particularly given the "downward pressure on code quality" these tools cause, per the recent GitClear report?
While this is just one more example of #enshittification, it's also a salient lesson for #DevRel folks - if your community is your source of advantage, don't upset them.
The most important takeaway is just how little space 16 cyclists take up while waiting at the lights. It's barely noticeable. It always seems like there are lots of cars, but when you count them there are actually very few.
You need really large numbers of cyclists before they're generally visible.
It's no wonder people say 'nobody uses the bike lanes' because if you're not actually counting it feels like nobody is.
The publisher of a small imprint of roleplaying game magazines and speculative fiction is shutting down after 22 years because their submissions have been flooded with AI-generated work to the extent that they cannot wade through it:
“The problem with AI is the people who use AI. These are people who think their ‘ideas’ are more important than the actual craft of writing, so they churn out all these ‘ideas’ and enter their idea prompts and think the output is a story.”
Another research-policy heavyweight that recently said goodbye to X/Twitter is ALLEA, the All European Academies, a federation of academies of sciences and similar institutions across Europe. https://allea.org/allea-ceases-activities-on-x-twitter/
It gave this reasoning:
“Our commitment to academic freedom and to science as a global public good is at the heart of our activities, and X's current policies are not in line with our mission.” https://twitter.com/ALLEA_academies/status/1737751876521607295
On Mastodon it is active at: https://eupolicy.social/@ALLEA
In its announcement, it commits to the principles of the #Fediverse and #OpenScience:
“Freie Universität Berlin has been represented on Mastodon since December 2022. #Mastodon is part of the decentralized Fediverse network. Compared with large commercial social networks, Mastodon relies on chronological feeds and reduces the algorithmic sorting of posts. This ensures that information is openly available at all times and reduces the likelihood of so-called filter bubbles forming.”