Trying something new, everyone is guaranteed an interview! Open interviews! For a limited time no one will be skipped (except for clear cases of abuse).
We still have about 10 more 100% remote, full-time, market-fair positions to fill here at QOTO/CleverThis.
100% remote, work from anywhere, even the beach, market-fair offers. Ethics first, we treat our people like family.
We have an urgent need for machine learning experts with a background in NLP and deep learning (natural language processing and neural networks). There is a focus on knowledge graphs, mathematics, Java, and C; we're looking for polyglots.
We are an open-source first company, we give back heavily to the OSS community.
We need everything from junior to senior, data scientist to programmer. If you're in IT and you're good, you might be a fit.
I will personally be both your direct boss and your hiring manager. I am also the founder and inventor.
The NLP position can be found at this link, other positions can be found on the menu bar on the left:
If you would like to submit yourself for an interview (for a limited time, I am guaranteeing you a first-stage interview), you can submit your application here and even schedule your interview as you apply, instantly!
Please boost for reach if this kind of stuff interests you. I will post more on this later.
Once upon a time, there was a cool emulator frontend called RetroArch. This emulator wasn't accessible until I and a few other gamers went to them and asked about adding accessibility. An amazing person known as BarryR made it happen. Now, if you turn on accessibility mode in settings, or pass the "--accessibility" (or something like that) flag on the command line, you get spoken menus, including the emulator's pause menu, which is good for saving states and such. Then, using PIL and other image-processing Python utilities, running a server and hooking into RetroArch, the script allowed players to move around the map, battle, talk to NPCs, etc. The only problem was, no one wanted to test it. The blind gaming community pretty much spoke, saying that we want new games. We want cool, new, easy accessibility. So that's what we have now: follow the beacon, or get sighted help in the case of Diablo and such. It's sad, but meh. It's what we wanted, I guess. No Zelda for us. So, this is about as far as he got:
To expand on what devinprater was saying: I am working on an accessibility pack/service for Final Fantasy 1 for the NES (this was what was shown in the latest RetroArch update). The idea is similar to how Pokemon Crystal access works, but it's using the RetroArch AI Service interface to do so.
Right now, the FF1 access service is mostly done, but I need more testers to try it out and give me feedback on how it's working. Currently, you can get up to the point where you get the ship, but there's no code to deal with how the ship moves, so that still needs to be done. Likewise with the airship later on.
The service works with the latest version of RetroArch on Linux and Mac, but not Windows. This is due to how NVDA reads out the text, and until the next major update to NVDA (which will have a feature to fix this), it'll have to wait. If you have one of those platforms, I (or maybe devinprater) can help you set it up on Mac/Linux to test it out. The package itself is available at: https://ztranslate.net/download/ff1_pac … zip?owner=
Tired of neutral responses from LLMs? Llama-3 seems great at following system prompts, so try this system prompt for an opinionated chatbot.
"You are a helpful, opinionated, decisive assistant. When asked a yes/no question, begin your response with a one-word answer: yes or no. For open-ended or complex questions, adopt a firm stance. Justify your views with well-reasoned arguments, robust evidence, and succinct explanations, ensuring clarity and confidence in every response." #LLM #AI #ML
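As a minimal sketch of trying a system prompt like this locally (assuming an Ollama server running on its default port with a llama3 model pulled; the helper names here are mine, not from any official example):

```python
import json
import urllib.request

# The opinionated system prompt from the post above.
SYSTEM_PROMPT = (
    "You are a helpful, opinionated, decisive assistant. When asked a "
    "yes/no question, begin your response with a one-word answer: yes or no. "
    "For open-ended or complex questions, adopt a firm stance. Justify your "
    "views with well-reasoned arguments, robust evidence, and succinct "
    "explanations, ensuring clarity and confidence in every response."
)

def build_chat_request(user_message, model="llama3"):
    """Build the JSON body for Ollama's /api/chat endpoint,
    with the system prompt prepended to the conversation."""
    return {
        "model": model,
        "stream": False,
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_message},
        ],
    }

def ask(user_message, host="http://localhost:11434"):
    """Send one question to a locally running Ollama server."""
    body = json.dumps(build_chat_request(user_message)).encode()
    req = urllib.request.Request(
        f"{host}/api/chat", data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["message"]["content"]

if __name__ == "__main__":
    # Requires Ollama to be running locally.
    print(ask("Is Python a good first programming language?"))
```

The same message structure works with most OpenAI-compatible chat endpoints, so the prompt is easy to reuse elsewhere.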
A study that confirms what I’ve been suspecting for a while: fine-tuning a #LLM with new knowledge increases its tendency to hallucinate.
If the new knowledge wasn't provided in the original training set, then the model has to shift its weights from their previous optimal state to a new state that accommodates both the previous and the new knowledge - and that new state may not be optimal.
Without a new validation round against the whole previous cross-validation and test sets, that's just likely to increase the chances of the model going off on a tangent.
VOLlama v0.1.0, an open-source, accessible chat client for OLlama
Unfortunately, many user interfaces for open-source large language models are either inaccessible or annoying to use with screen readers, so I decided to make one for myself and others. Non-screen-reader users are welcome to use it as well.
I hope that ML UI libraries like Streamlit and Gradio will become more screen-reader friendly in the future, so that making apps like this is no longer necessary! #LLM #AI #ML https://chigkim.github.io/VOLlama/
When you choose to use an #ML #LLM foundation model, you accept the risk management decisions made by the vendor without your input. Wonder what they are? Read this #MLsec paper from #IEEE Computer.
I created a multi-needle-in-a-haystack test where a randomly selected secret sentence was split into pieces that were scattered in random places throughout a document of 7.5k tokens. The task was to find these pieces and reconstruct the complete sentence with exact words, punctuation, capitalization, and sequence. After running 100 tests, llama3:8b-instruct-q8 achieved a 44% success rate, while llama3:70b-instruct-q8 achieved 100%! #LLM #AI #ML https://github.com/chigkim/haystack-test
GPT-4o combines audio, image, and text. It can analyze actual audio, and you can interrupt it while it's speaking. It can pick up emotion from audio, and you can ask it to speak in different styles, including singing! It can see images in real time and chat by voice - for example, solving an equation in real time as you write it on paper. This is amazing! #LLM #AI #GPT #ML #OpenAI
So they're trying to use machine learning to directly generate movies? Meanwhile, as I understand it, there are still really tedious tasks in VFX that could really use better automation. Are there any efforts to use AI for better green-screen software? Handling motion blur, translucent materials, and reflections? Automatically fixing green spill? Wouldn't that be an obvious thing to develop, something that would sell like hotcakes? (I'm just a curious outsider to this field, wondering.) #AI #ML #VFX
I was curious if a niche blog post of mine had been slurped up by #ChatGPT so I asked a leading question—what I discovered is much worse. So far, it has told me:
• use apt-get on Endless OS
• preview a Jekyll site locally by opening files w/a web browser (w/o building)
• install several non-existent #Flatpak “packages” & extensions
It feels exactly like chatting w/someone talking out of their ass but trying to sound authoritative. #LLMs need to learn to say, “I don’t know.”
"Artificial intelligence is likely to impact 40% of jobs worldwide, and 60% of jobs in advanced economies, such as the US and UK.
If managed poorly, it could be quite the doomsday scenario. It could bring a tremendous increase in productivity if we manage it well, but it can also lead to more misinformation and, of course, more inequality in our society."
I wonder which of those scenarios it will turn out to be. 🤔 😏