cigitalgem, to ai
@cigitalgem@sigmoid.social avatar

CalypsoAI produced a video for RSA about #MLsec #AI #ML #security #infosec #LLM

Watch as I interview Jim Routh and Neil Serebryany.

https://berryvilleiml.com/2024/05/10/video-interview-a-deep-dive-into-generative-ai-and-cybersecurity/

metin, to ai
@metin@graphics.social avatar

This is pretty cool. Curious what discoveries lie ahead…

AlphaFold 3 predicts the structure and interactions of all of life's molecules

https://blog.google/technology/ai/google-deepmind-isomorphic-alphafold-3-ai-model/

metin, (edited) to blender
@metin@graphics.social avatar

Tried Leiapix's automatic depth algorithm on an old 3D-rendered image of mine.

Nice result out of the box, with only a few minor errors here and there.

https://www.leiapix.com

chikim, to llm
@chikim@mastodon.social avatar

I created a multi-needle-in-a-haystack test: a randomly selected secret sentence was split into pieces and scattered in random places throughout a document of about 7.5k tokens. The task was to find these pieces and reconstruct the complete sentence with exact words, punctuation, capitalization, and sequence. After running 100 tests, llama3:8b-instruct-q8 achieved a 44% success rate, while llama3:70b-instruct-q8 achieved 100%! https://github.com/chigkim/haystack-test
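
For the curious, the construction is simple to reproduce. Here is a minimal Python sketch of the idea (the filler text, piece count, labeling scheme, and document length are illustrative assumptions, not the exact setup of the linked repo):

import random

def build_haystack(secret, filler, n_pieces=3, n_filler=700):
    # Cut the secret sentence into n roughly equal word chunks, in order.
    words = secret.split()
    bounds = [round(i * len(words) / n_pieces) for i in range(n_pieces + 1)]
    pieces = [" ".join(words[bounds[i]:bounds[i + 1]]) for i in range(n_pieces)]
    labeled = [f"Secret piece {i + 1}: {p}" for i, p in enumerate(pieces)]

    # Build a filler document, then splice the pieces in at random positions.
    doc = [random.choice(filler) for _ in range(n_filler)]
    for piece in labeled:
        doc.insert(random.randrange(len(doc) + 1), piece)
    return "\n".join(doc)

haystack = build_haystack(
    "The quick brown fox jumps over the lazy dog.",
    ["Nothing of interest happens in this sentence."],
)
prompt = (haystack + "\n\nFind all secret pieces and reconstruct the complete "
          "sentence exactly: same words, punctuation, capitalization, and order.")

Grading is then a strict string comparison between the model's answer and the original sentence.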

remixtures, to ML Portuguese
@remixtures@tldr.nettime.org avatar

: "Machine learning (ML) methods are proliferating in scientific research. However, the adoption of these methods has been accompanied by failures of validity, reproducibility, and generalizability. These failures can hinder scientific progress, lead to false consensus around invalid claims, and undermine the credibility of ML-based science. ML methods are often applied and fail in similar ways across disciplines. Motivated by this observation, our goal is to provide clear recommendations for conducting and reporting ML-based science. Drawing from an extensive review of past literature, we present the REFORMS checklist (recommendations for machine-learning-based science). It consists of 32 questions and a paired set of guidelines. REFORMS was developed on the basis of a consensus of 19 researchers across computer science, data science, mathematics, social sciences, and biomedical sciences. REFORMS can serve as a resource for researchers when designing and implementing a study, for referees when reviewing papers, and for journals when enforcing standards for transparency and reproducibility." https://www.science.org/doi/10.1126/sciadv.adk3452

pixelate, to accessibility
@pixelate@tweesecake.social avatar

Please boost for reach if this kind of stuff interests you. Will post more on this later.

Once upon a time, there was a cool emulator frontend called RetroArch. This emulator wasn't accessible until I and a few other gamers went to the developers and asked about adding accessibility. An amazing person known as BarryR made it happen. Now, if you turn on accessibility mode in settings, or pass the "--accessibility" (or something like that) flag on the command line, you get spoken menus, including the emulator's pause menu, good for saving states and such.

Then, using PIL and other Python image-processing utilities, running a server and hooking into RetroArch, the script allowed players to move around the map, battle, talk to NPCs, etc. The only problem was, no one wanted to test it. The blind gaming community pretty much spoke, saying that we want new games. We want cool, new, easy accessibility. So that's what we have now: follow the beacon or get sighted help in the case of Diablo and such. It's sad, but meh. It's what we wanted, I guess. No Zelda for us. So, this is about as far as he got:

To expand on what devinprater was saying: I am working on an accessibility pack/service for Final Fantasy 1 for the NES (this was what was shown in the latest RetroArch update). The idea is similar to how Pokemon Crystal access works, but it's using the RetroArch AI Service interface to do so.
Right now, the FF1 access service is mostly done, but I need more testers to try it out and give me feedback on how it's working. Currently, you can get up to the point where you get the ship, but there's no code to deal with how the ship moves, so that still needs to be done. Likewise with the airship later on.
The service works with the latest version of RetroArch on Linux and Mac, but not Windows. This is due to how NVDA reads out the text; until the next major update to NVDA (which will have a feature to fix this), Windows support will have to wait. If you have those, I (or maybe devinprater) can help you set it up on Mac/Linux to test out. The package itself is available at: https://ztranslate.net/download/ff1_pac … zip?owner=
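
For anyone wondering how a service like this hooks in: as I understand it, RetroArch's AI Service sends the current frame to a user-configured URL and can speak the text that comes back. A toy sketch of the server side in Python might look like the following; the request body format, the "text" response field, and the port are all assumptions for illustration, so check the RetroArch/ztranslate docs for the real contract:

import io
import json
from http.server import BaseHTTPRequestHandler, HTTPServer
from PIL import Image  # to decode the screenshot the frontend sends

class AccessService(BaseHTTPRequestHandler):
    def do_POST(self):
        # Assumption: the request body is the raw screenshot image.
        raw = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        frame = Image.open(io.BytesIO(raw))

        # A real service would read the game state from the pixels here
        # (map position, menus, battle text). We just echo the frame size.
        text = f"Got a {frame.width} by {frame.height} frame."

        # Assumption: the frontend speaks the "text" field of the reply.
        body = json.dumps({"text": text}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

HTTPServer(("127.0.0.1", 4404), AccessService).serve_forever()  # arbitrary local port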

osi, to opensource
@osi@opensource.org avatar

Entering a new phase of the process: Validating the draft Open Source AI Definition. Which of these models comply with the principles of Open Source? Volunteer to analyze them!
https://discuss.opensource.org/t/definition-validation-seeking-volunteers/320

timbray, (edited) to photography
@timbray@cosocial.ca avatar

Twenty years ago, worried about how Photoshop could be used to lie, I published a blog piece called “Photointegrity”. Yesterday I published another one with the same title: https://www.tbray.org/ongoing/When/202x/2024/04/29/Photointegrity

It considers “photointegrity” in the context of gen-AI imaging tools, for example those recently announced by Adobe: https://petapixel.com/2024/04/23/adobe-introduces-one-of-its-most-significant-photoshop-updates-ever/ - which I find kind of terrifying.

Includes groovy pictures combining retro hardware with modern (non-generative) AI.

chikim, to llm
@chikim@mastodon.social avatar

VOLlama v0.1.0, an open-source, accessible chat client for Ollama
Unfortunately, many user interfaces for open-source large language models are either inaccessible or annoying to use with screen readers, so I decided to make one for myself and others. Non-screen-reader users are welcome to use it as well.
I hope that ML UI libraries like Streamlit and Gradio will become more screen-reader-friendly in the future, so that making apps like this is no longer necessary!

https://chigkim.github.io/VOLlama/

chikim, to llm
@chikim@mastodon.social avatar

Wow, Private LLM runs Llama-3-8B locally on iOS. No idea how accessible it is with VoiceOver, though. https://privatellm.app/en

cassidy, (edited) to ai
@cassidy@blaede.family avatar

I really like the convention of using ✨ sparkle iconography as an “automagic” motif, e.g. to smart-adjust a photo or to automatically handle some setting. I hate that it has become the de facto iconography for generative AI. 🙁

cassidy,
@cassidy@blaede.family avatar

Aha! A week later @davidimel has an excellent video about this: https://youtu.be/g-pG79LOtMw?si=9B2KCLRC5H4on5Wq

#AI #ML #LLM #LLMs #GenerativeAI #sparkles

remixtures, to ML Portuguese
@remixtures@tldr.nettime.org avatar

: "A recent innovation in the field of machine learning has been the creation of very large pre-trained models, also referred to as ‘foundation models’, that draw on much larger and broader sets of data than typical deep learning systems and can be applied to a wide variety of tasks. Underpinning text-based systems such as OpenAI's ChatGPT and image generators such as Midjourney, these models have received extraordinary amounts of public attention, in part due to their reliance on prompting as the main technique to direct and apply them. This paper thus uses prompting as an entry point into the critical study of foundation models and their implications. The paper proceeds as follows: In the first section, we introduce foundation models in more detail, outline some of the main critiques, and present our general approach. We then discuss prompting as an algorithmic technique, show how it makes foundation models programmable, and explain how it enables different audiences to use these models as (computational) platforms. In the third section, we link the material properties of the technologies under scrutiny to questions of political economy, discussing, in turn, deep user interactions, reordered cost structures, and centralization and lock-in. We conclude by arguing that foundation models and prompting further strengthen Big Tech's dominance over the field of computing and, through their broad applicability, many other economic sectors, challenging our capacities for critical appraisal and regulatory response." https://journals.sagepub.com/doi/full/10.1177/20539517241247839

vascorsd, to programming
@vascorsd@mastodon.social avatar

CakeML - A verified implementation of ML - https://cakeml.org/

CakeML is a functional programming language and an ecosystem of proofs and tools built around the language. The ecosystem includes a proven-correct compiler that can bootstrap itself.

cigitalgem, to ML
@cigitalgem@sigmoid.social avatar

Bergen Machine Learning Security talk in ten minutes. See you there?


chikim, to llm
@chikim@mastodon.social avatar

Tired of neutral responses from LLMs? Llama-3 seems great at following system prompts, so try this system prompt for an opinionated chatbot:
"You are a helpful, opinionated, decisive assistant. When asked a yes/no question, begin your response with a one-word answer: yes or no. For open-ended or complex questions, adopt a firm stance. Justify your views with well-reasoned arguments, robust evidence, and succinct explanations, ensuring clarity and confidence in every response."

veronica, to ai
@veronica@mastodon.online avatar

I've noticed that anti-virus vendors have started adding ML to their software, because it is significantly increasing the number of false positives on my Python-based app. 🙄

All of the red entries on VirusTotal lately have been ML detections. Of course, anti-virus mistakenly flagging Python apps as malware has always been a problem, but after I stopped using the common packaging tools, it went away. Now it's back.

chikim, to llm
@chikim@mastodon.social avatar

Mark Zuckerberg on Llama 3: Apparently Meta stopped training Llama-3-70b before convergence and decided to move on to Llama-4, meaning they could have kept training and made it smarter! Also coming: a multimodal and multilingual Llama-3-70b with a bigger context window. https://youtu.be/bc6uFV9CJGg
