upol, to ai
@upol@hci.social avatar

We have an exciting main event at #HCXAI at #chi2024 today!

We have @janethaven from @datasociety and
Kush Varshney from @ibmresearch for an invigorating discussion on AI governance and policymaking to take Explainable AI beyond academia.

w/ @Riedl @sunniesuhyoung @nielsvanberkel

#AI #ResponsibleAI #ExplainableAI #XAI #academia

FraunhoferFOKUS, to ArtificialIntelligence German
@FraunhoferFOKUS@social.bund.de avatar

🔎 Black-box AI – 🇪🇺 An EU research project looks into the "inner workings" of industrial AI: As part of the XMANAI project, 15 research partners have developed a platform that supports the transparent use of industrial AI: ➡️ https://www.fokus.fraunhofer.de/de/news/dps/xmanai_2024_04

jeppe, to random Danish
@jeppe@uddannelse.social avatar

"Hvorfor jeg ikke vil bruge ChatGPT"

Min gode kollega, @elisanadire, har stærke argumenter, men jeg er ikke overbevist nok til ikke at bruge chatgpt i ny og næ...

https://www.version2.dk/holdning/hvorfor-jeg-ikke-vil-bruge-chatgpt

Ove,
@Ove@uddannelse.social avatar

@jeppe @elisanadire There are many problems with LLMs and the business models they rely on. But I don't agree that bias is a problem. It only is if you treat an LLM as a knowledge resource, which it is not. That you don't know why a given text is generated, i.e. that you can't trace the path from input to output, is a separate problem that limits its use, especially for decision support.

upol, to random
@upol@hci.social avatar

📢 5 days till deadline for the Human-centered Explainable AI (#HCXAI) workshop at CHI! Pls repost & help us spread the word 🙏

Submission pro tips:

  1. Explicitly address more than one of the CfP questions listed on the website.

  2. Yes, papers NOT dealing with LLMs are fine.

  3. Engage w/ past submissions (build on, don't repeat).

  4. Position papers must make a well-justified argument, not just summarize findings.

💌 w/ @Riedl @sunniesuhyoung

#academia #AI #hci #XAI #ExplainableAI

https://hcxai.jimdosite.com/

isws, to LLMs

This is the International Semantic Web Research Summer School (ISWS) tooting! ISWS is a full-immersion, super-intensive one-week experience that includes lectures and keynotes from outstanding speakers and a “learning by doing” teamwork program on open research problems, carried out under the guidance of the best scientists in the field.
website: https://2024.semanticwebschool.org/

@lysander07 @tabea @sashabruns @MahsaVafaie @fizise

janeadams, to Futurology
@janeadams@vis.social avatar

Anyone work at a company they like that's hiring interns for summer 2024? Interested in roles related to

I'm a third year PhD student in Computer Science at an R1 university (completed MS coursework), and a dual citizen USA/EU :) Thanks for boosting!

tero, to llm
@tero@rukii.net avatar

Explainability for AI systems, whether chatbots or backend components, is very important for trustworthiness and debugging.

One of the most effective ways to achieve it is to make the chatbot refer to sources of facts. Tag every RAG document partial with an anchor, and tell the chatbot to refer to them. Then, in the final presentation, you can show a link to the document part that was used.

In a document index you similarly want to include metadata for partials, so that a user can click "show the whole document" and navigate the information if they want to see where the content came from.

It's all quite explainable, but it requires a bit of work.
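
A minimal sketch of this anchor-tagging flow, in Python. Everything here is an assumption for illustration: the chunk data, the example URLs, the prompt wording, and the commented-out llm.generate() call are hypothetical placeholders, since the post names no specific stack.

import re

# Retrieved RAG partials, each tagged with an anchor id and a deep link.
# Ids and URLs are made-up placeholders.
chunks = [
    {"id": "doc42#p3", "url": "https://example.com/doc42#p3",
     "text": "The warranty covers parts for two years."},
    {"id": "doc7#p1", "url": "https://example.com/doc7#p1",
     "text": "Returns are accepted within 30 days."},
]

def build_prompt(question):
    # Prefix every partial with its anchor so the model can cite it.
    context = "\n".join("[{}] {}".format(c["id"], c["text"]) for c in chunks)
    return ("Answer using only the sources below. Cite the anchor of every "
            "source you use in square brackets, e.g. [doc42#p3].\n\n"
            "Sources:\n" + context + "\n\nQuestion: " + question)

def link_citations(answer):
    # Turn each cited anchor back into a link to the document part used.
    by_id = {c["id"]: c["url"] for c in chunks}
    return re.sub(r"\[([^\]]+)\]",
                  lambda m: "[{}]({})".format(m.group(1),
                                              by_id.get(m.group(1), "#")),
                  answer)

# answer = llm.generate(build_prompt("How long is the warranty?"))  # hypothetical LLM call
answer = "Parts are covered for two years [doc42#p3]."
print(link_citations(answer))
# -> Parts are covered for two years [doc42#p3](https://example.com/doc42#p3).

The same anchor metadata can also back the "show the whole document" link mentioned above for document indexes.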

upol, to mastodon
@upol@hci.social avatar

All of XAI is basically Backstreet Boys

upol, to ai
@upol@hci.social avatar

Explainable AI (XAI) folks, looking for pointers to the biggest failures of popular techniques.

E.g., saliency maps not working for doctors.

Papers, articles, blogs == all fair game.

Self-plugs are encouraged!

Please boost/repost and help!

itnewsbot, to security
@itnewsbot@schleuss.online avatar

How Important Is Explainability in Cybersecurity AI? - Artificial intelligence is transforming many industries but few as dramatically as... - https://readwrite.com/how-important-is-explainability-in-cybersecurity-ai/

upol, (edited) to academia
@upol@hci.social avatar

If nothing else, at the very least, scientific research will humble you.

Most of my life's work in one page.

A lot of work done.

Yet so much more to do.

It's a humbling feeling.

upol, to ai
@upol@hci.social avatar

There are times you need to go back to the drawing board and start all over.

For the last few weeks, I've been doing a meta-analysis of my work. I put the key parts of all my papers in one place and started distilling the findings until I reached saturation.

I'm starting to see my own work in a new light. This process has been revealing. I strongly recommend it to all scholars and researchers.

#ExplainableAI #HCXAI #AI #AcademicChatter


nyagnik, to ai

Hi all!
Does your work involve providing explainability and/or transparency for machine learning systems? We are a team of HCI researchers at UCSD, who would like to interview you about your experience, process, and any problems you run into, particularly in how you evaluate your tools and explanations. The interview takes ~30 minutes, and you will be compensated $15.50/hour for your time. Please sign up for a time using this link: https://calendly.com/nyagnik/xai-interview

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

: "Trustworthy Artificial Intelligence (AI) is based on seven technical requirements sustained over three main pillars that should be met throughout the system’s entire life cycle: it should be (1) lawful, (2) ethical, and (3) robust, both from a technical and a social perspective. However, attaining truly trustworthy AI concerns a wider vision that comprises the trustworthiness of all processes and actors that are part of the system’s life cycle, and considers previous aspects from different lenses. A more holistic vision contemplates four essential axes: the global principles for ethical use and development of AI-based systems, a philosophical take on AI ethics, a risk-based approach to AI regulation, and the mentioned pillars and requirements. The seven requirements (human agency and oversight; robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental wellbeing; and accountability) are analyzed from a triple perspective: What each requirement for trustworthy AI is, Why it is needed, and How each requirement can be implemented in practice. On the other hand, a practical approach to implement trustworthy AI systems allows defining the concept of responsibility of AI-based systems facing the law, through a given auditing process. Therefore, a responsible AI system is the resulting notion we introduce in this work, and a concept of utmost necessity that can be realized through auditing processes, subject to the challenges posed by the use of regulatory sandboxes. Our multidisciplinary vision of trustworthy AI culminates in a debate on the diverging views published lately about the future of AI. Our reflections in this matter conclude that regulation is a key for reaching a consensus among these views, and that trustworthy and responsible AI systems will be crucial.."

https://www.sciencedirect.com/science/article/pii/S1566253523002129
