freemo, to ArtificialIntelligence
@freemo@qoto.org avatar

Please reboost!

Trying something new: everyone is guaranteed an interview! Open interviews! For a limited time, no one will be skipped (except in clear cases of abuse).

So we still have about 10 more 100% remote, full-time, market-fair positions to hire for here at QOTO/CleverThis.

100% remote, work from anywhere, even the beach, market-fair offers. Ethics first, we treat our people like family.

We have an urgent need for machine learning experts with a background in NLP and deep learning (natural language processing and neural networks). There is a focus on knowledge graphs, mathematics, Java, and C; we are looking for polyglots.

We are an open-source first company, we give back heavily to the OSS community.

We need everything from junior to senior, data scientist to programmer. If you're in IT and you're good, you might be a fit.

I will personally be both your direct boss, and hiring manager. I am also the founder and inventor.

The NLP position can be found at this link, other positions can be found on the menu bar on the left:

https://docs.cleverthis.com/en/human_resources/organizational_structure/sr_data_scientist_(nlp)

If you would like to submit yourself for an interview (for a limited time I am guaranteeing you will get a first-stage interview), you can submit your application here and even schedule your interview as you apply, instantly!

https://cal.com/team/cleverthis/interview-stage-1

For those of you who can't schedule during core hours, you can schedule in my free time if you'd like a chance (the company doesn't have fixed hours):

https://cal.com/team/cleverthis/interview-stage-1-extended

pixelate, to accessibility
@pixelate@tweesecake.social avatar

Please boost for reach if this kind of stuff interests you. Will post more on this later.

Once upon a time, there was a cool emulator frontend called RetroArch. This emulator wasn't accessible until I and a few other gamers went to them and asked about adding accessibility. An amazing person known as BarryR made it happen. Now, if you turn on accessibility mode in settings, or pass the "--accessibility" (or something like that) flag on the command line, you get spoken menus, including the emulator's pause menu, good for saving states and such. Then, using PIL and other image-processing Python utilities, running a server and hooking into RetroArch, the script allowed players to move around the map, battle, talk to NPCs, etc. The only problem was, no one wanted to test it. The blind gaming community pretty much spoke, saying that we want new games. We want cool, new, easy accessibility. So that's what we have now: follow the beacon or get sighted help in the case of Diablo and such. It's sad, but meh. It's what we wanted, I guess. No Zelda for us. So, this is about as far as he got:

To expand on what devinprater was saying: I am working on an accessibility pack/service for Final Fantasy 1 for the NES (this was what was shown in the latest RetroArch update). The idea is similar to how Pokemon Crystal access works, but it's using the RetroArch AI Service interface to do so.
Right now, the FF1 access service is mostly done, but I need more testers to try it out and give me feedback on how it's working. Right now, you can get up to the point where you get the ship, but there's no code to deal with how the ship moves, so that still needs to be done. Likewise with the airship later on.
The service works with the latest version of RetroArch, on Linux and Mac, but not Windows. This is due to how NVDA reads out the text, and until the next major update to NVDA (which will have a feature to fix this), it'll have to wait. If you have those, I (or maybe devinprater) can help you set it up on Mac/Linux to test it out. The package itself is available at: https://ztranslate.net/download/ff1_pac … zip?owner=

chikim, to llm
@chikim@mastodon.social avatar

Tired of neutral responses from LLMs? Llama-3 seems great at following system prompts, so try this system prompt for an opinionated chatbot.
"You are a helpful, opinionated, decisive assistant. When asked a yes/no question, begin your response with a one-word answer: yes or no. For open-ended or complex questions, adopt a firm stance. Justify your views with well-reasoned arguments, robust evidence, and succinct explanations, ensuring clarity and confidence in every response."
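As a rough sketch of how you might wire this up: the snippet below sets the system prompt via Ollama's `/api/chat` endpoint. It assumes a locally running Ollama server with a Llama-3 model already pulled; the model tag, helper names, and default host are my own choices, not anything from the post.

```python
import json
import urllib.request

SYSTEM_PROMPT = (
    "You are a helpful, opinionated, decisive assistant. When asked a yes/no "
    "question, begin your response with a one-word answer: yes or no. For "
    "open-ended or complex questions, adopt a firm stance. Justify your views "
    "with well-reasoned arguments, robust evidence, and succinct explanations, "
    "ensuring clarity and confidence in every response."
)

def build_chat_request(user_message: str, model: str = "llama3") -> dict:
    """Build the JSON body for Ollama's /api/chat endpoint, with the
    opinionated system prompt prepended to the conversation."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_message},
        ],
        "stream": False,  # return one complete response instead of chunks
    }

def ask(question: str, host: str = "http://localhost:11434") -> str:
    """Send the question to a local Ollama server and return the reply text."""
    req = urllib.request.Request(
        host + "/api/chat",
        data=json.dumps(build_chat_request(question)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["message"]["content"]
```

With a server running, `ask("Is static typing worth it?")` should come back with a blunt yes or no up front rather than a both-sides hedge.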

fabio, to llm
@fabio@manganiello.social avatar

A study that confirms what I’ve been suspecting for a while: fine-tuning an LLM with new knowledge increases its tendency to hallucinate.

If the new knowledge wasn’t provided in the original training set, then the model has to shift its weights from their previous optimal state to a new state that accommodates both the previous and the new knowledge - and that new state may not be optimal.

Without a new validation round against the full previous cross-validation and test sets, that is likely to increase the chances of the model going off on a tangent.

@ai

https://arxiv.org/abs/2405.05904

embedded, to embedded
@embedded@mstdn.social avatar

Kwabena Agyeman spoke with Elecia (@logicalelegance) and Chris (@stoneymonster) about optimization, cameras, machine learning, and vision systems.

Join us for another informative episode of Embedded here: https://embedded.fm/episodes/477 .

#embedded #camera #opensource #computervision #ML #wasps

chikim, to llm
@chikim@mastodon.social avatar

VOLlama v0.1.0, an open-source, accessible chat client for Ollama
Unfortunately, many user interfaces for open-source large language models are either inaccessible or annoying to use with screen readers, so I decided to make one for myself and others. Non-screen-reader users are welcome to use it as well.
I hope that ML UI libraries like Streamlit and Gradio will become more screen-reader friendly in the future, so that making apps like this is no longer necessary!

https://chigkim.github.io/VOLlama/

cigitalgem, to llm
@cigitalgem@sigmoid.social avatar

Giving my Risks talk (BIML work) today for Google on a global forum. If you work for Google, hop on!

Noon Eastern, hosted by Zurich.

cigitalgem, to ML
@cigitalgem@sigmoid.social avatar

I am speaking tonight at the #ISSA NOVA chapter meeting. Meeting starts at 5:30 in Reston at the Microsoft building.

10, 23, 81 — Stacking up the LLM Risks: Applied Machine Learning Security

https://www.issa-nova.org/may-16-530pm-dr-gary-mcgraw-on-stacking-up-the-llm-risks-applied-machine-learning-security/

#MLsec #ML #AI #LLM

cigitalgem, to ML
@cigitalgem@sigmoid.social avatar

When you choose to use an #ML #LLM foundation model, you accept the risk management decisions made by the vendor without your input. Wonder what they are? Read this #MLsec paper from #IEEE computer.

https://berryvilleiml.com/2024/05/16/how-to-regulate-llms/

osi, to opensource
@osi@opensource.org avatar

Entering a new phase of the process: Validating the draft Open Source AI Definition. Which of these models comply with the principles of Open Source? Volunteer to analyze them!
https://discuss.opensource.org/t/definition-validation-seeking-volunteers/320

chikim, to llm
@chikim@mastodon.social avatar

I created a multi-needle-in-a-haystack test where a randomly selected secret sentence was split into pieces and scattered at random places throughout a document of 7.5k tokens. The task was to find these pieces and reconstruct the complete sentence with exact words, punctuation, capitalization, and sequence. After running 100 tests, llama3:8b-instruct-q8 achieved a 44% success rate, while llama3:70b-instruct-q8 achieved 100%! https://github.com/chigkim/haystack-test
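The construction can be sketched in a few lines of Python. This is a toy illustration of the idea, not the code from the linked repo; the marker string, `scatter_needle`, and its parameters are all my own names.

```python
import random

MARKER = "The secret piece is:"

def scatter_needle(secret_sentence, filler_sentences, n_pieces=4, seed=0):
    """Split a secret sentence into n_pieces contiguous chunks and insert
    them, in their original order, at random positions in the filler text."""
    rng = random.Random(seed)
    words = secret_sentence.split()
    # Pick cut points so each piece keeps the original word order.
    bounds = sorted(rng.sample(range(1, len(words)), n_pieces - 1))
    starts = [0] + bounds
    ends = bounds + [len(words)]
    pieces = [" ".join(words[i:j]) for i, j in zip(starts, ends)]
    # Ascending insertion points preserve the pieces' sequence in the document.
    positions = sorted(rng.sample(range(len(filler_sentences) + 1), n_pieces))
    doc = list(filler_sentences)
    for already_inserted, (pos, piece) in enumerate(zip(positions, pieces)):
        doc.insert(pos + already_inserted, f"{MARKER} {piece}")
    return "\n".join(doc)

def reconstruct(document):
    """Recover the secret sentence by collecting the marked pieces in order."""
    pieces = [line[len(MARKER) + 1:] for line in document.splitlines()
              if line.startswith(MARKER)]
    return " ".join(pieces)
```

Scoring each run is then an exact-match comparison between the model's answer and the original sentence, which is what makes the test sensitive to dropped punctuation or reordered pieces.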

cigitalgem, to ai
@cigitalgem@sigmoid.social avatar

CalypsoAI produced a video for RSA about #MLsec #AI #ML #security #infosec #LLM

Watch as I interview Jim Routh and Neil Serebryany.

https://berryvilleiml.com/2024/05/10/video-interview-a-deep-dive-into-generative-ai-and-cybersecurity/

cigitalgem, to llm
@cigitalgem@sigmoid.social avatar

Recursive pollution is a very serious risk. At BIML, we slate it as number one. DO NOT EAT YOUR OWN BRAINS.

https://www.ft.com/content/053ee253-820e-453a-a1d5-0f24985258de

chikim, to llm
@chikim@mastodon.social avatar

GPT-4o combines audio, image, and text. It can analyze actual audio, and you can interrupt its voice. It can pick up emotion from audio. You can also ask for speech in different styles, including singing! It can see images in real time and chat by voice - for example, solving an equation in real time as you write it on paper. This is amazing!

bloody_albatross, to ai
@bloody_albatross@chaos.social avatar

So they're trying to use machine learning to directly generate movies? Meanwhile, as I understand it, there are still really tedious tasks in VFX that could really use better automation. Are there any efforts to use AI for better green-screen software? Handling motion blur, translucent materials, and reflections? Automatically fixing green spill? Wouldn't that be an obvious thing to develop, something that would sell like warm bread rolls? (I'm just a curious outsider to that, wondering.)

cigitalgem, to ML
@cigitalgem@sigmoid.social avatar

Wonder how the government should regulate LLMs? Here's how. #MLsec #ML #AI #LLM

https://berryvilleiml.com/2024/05/16/how-to-regulate-llms/

timbray, (edited ) to photography
@timbray@cosocial.ca avatar

Twenty years ago, worried about how Photoshop could be used to lie, I published a blog piece called “Photointegrity”. Yesterday I published another one with the same title: https://www.tbray.org/ongoing/When/202x/2024/04/29/Photointegrity

It considers “photointegrity” in the context of gen-AI imaging tools, like for example those recently announced by Adobe: https://petapixel.com/2024/04/23/adobe-introduces-one-of-its-most-significant-photoshop-updates-ever/ - which I find kind of terrifying.

Includes groovy pictures combining retro hardware with modern (non-generative) AI.

cassidy, to ChatGPT
@cassidy@blaede.family avatar

I was curious if a niche blog post of mine had been slurped up by ChatGPT, so I asked a leading question—what I discovered is much worse. So far, it has told me:

• use apt-get on Endless OS
• preview a Jekyll site locally by opening files w/a web browser (w/o building)
• install several non-existent “packages” & extensions

It feels exactly like chatting w/someone talking out of their ass but trying to sound authoritative. LLMs need to learn to say, “I don’t know.”

metin, to ai
@metin@graphics.social avatar

"Artificial intelligence is likely to impact 40% of jobs worldwide, and 60% of jobs in advanced economies, such as the US and UK.

If managed poorly, it could be quite the doomsday scenario. It could bring a tremendous increase in productivity if we manage it well, but it can also lead to more misinformation and, of course, more inequality in our society."

I wonder which of those scenarios it will turn out to be. 🤔 😏

https://www.theregister.com/2024/05/15/imf_boss_ai_tsunami/

LChoshen, to llm
@LChoshen@sigmoid.social avatar

Do LLMs learn foundational concepts required to build world models? (less than expected)

We address this question with 🌐🐨EWoK (Elements of World Knowledge)🐨🌐

a flexible cognition-inspired framework to test knowledge across physical and social domains

https://ewok-core.github.io

cigitalgem, to ML
@cigitalgem@sigmoid.social avatar

Very excited to play with GPT-Z37a++. I am sure the new name makes all the difference!

Look over there...
