We have an exciting main event at #HCXAI at #chi2024 today!
We have @janethaven from @datasociety and
Kush Varshney from @ibmresearch for an invigorating discussion on AI governance and policymaking to take Explainable AI beyond academia.
🔎 Black-box AI – 🇪🇺 EU research project looks into the "inner workings" of industrial #KI: As part of the XMANAI project, 15 research partners have developed a platform that supports the transparent use of industrial AI: ➡️ https://www.fokus.fraunhofer.de/de/news/dps/xmanai_2024_04
@jeppe@elisanadire There are many problems with LLMs and the business models they rely on. But I don't agree that bias is a problem. It only is if you treat an LLM as a knowledge resource, which it is not. Not knowing why a text is generated – that is, not being able to trace the path from input to output – is a separate problem that limits its use, especially for decision support. #ExplainableAI #GenKI #skolechat
This is the International Semantic Web Research Summer School (ISWS) tooting! ISWS is a full-immersion, super-intensive one-week experience including lectures and keynotes from outstanding speakers, plus a "learning by doing" teamwork program on open research problems under the guidance of the best scientists in the field.
website: https://2024.semanticwebschool.org/
I'm a third year PhD student in Computer Science at an R1 university (completed MS coursework), and a dual citizen USA/EU :) Thanks for boosting! #GetFediHired
#ExplainableAI for #LLM systems, whether chatbots or backend components, is very important for trustworthiness and debugging.
One of the most effective ways to achieve it is to make the chatbot refer to sources of facts. Tag every RAG document partial with an anchor, and instruct the chatbot to cite them. Then the final presentation can show a link to the exact document part that was used.
In a document index you similarly want to include metadata for the partials, so that a user can click "show the whole document" and navigate the information if they want to see where the content came from.
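The anchoring idea above can be sketched in a few lines. This is a minimal illustration, not a specific library's API: the chunk structure, anchor format, and function names are all hypothetical, and the model call is mocked with a hard-coded answer.

```python
import re

# Hypothetical RAG partials: each chunk carries a stable anchor ID plus
# metadata pointing back to the full source document.
chunks = [
    {"anchor": "doc1#s2", "doc_id": "doc1", "title": "Ops Handbook",
     "text": "Restart the service with systemctl restart app."},
    {"anchor": "doc2#s1", "doc_id": "doc2", "title": "FAQ",
     "text": "Logs are stored under /var/log/app."},
]

def build_prompt(question: str, retrieved: list) -> str:
    """Inline each partial with its anchor and instruct the model to cite it."""
    context = "\n".join(f"[{c['anchor']}] {c['text']}" for c in retrieved)
    return (
        "Answer using only the context below. After each claim, cite the "
        "anchor of the partial you used, e.g. [doc1#s2].\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

def link_citations(answer: str, retrieved: list) -> list:
    """Map anchors cited in the model's answer back to chunk metadata,
    so the UI can render a 'show the whole document' link per claim."""
    by_anchor = {c["anchor"]: c for c in retrieved}
    cited = re.findall(r"\[([^\]]+)\]", answer)
    return [by_anchor[a] for a in cited if a in by_anchor]

# Mock model answer citing one partial (a real system would call an LLM here).
answer = "Restart it with systemctl restart app [doc1#s2]."
sources = link_citations(answer, chunks)
print([s["title"] for s in sources])  # -> ['Ops Handbook']
```

The key design choice is that anchors are opaque IDs the model merely copies back, so the mapping from citation to source stays deterministic even when the generated prose itself is not.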
There are times you need to go back to the drawing board and start all over.
For the last few weeks, I've been doing a meta-analysis of my own work. I put the key parts of all my papers in one place and kept distilling the findings until I reached saturation.
I'm starting to see my own work in a new light. This process has been revealing. I strongly recommend it to all scholars and researchers.
Hi all!
Does your work involve providing explainability and/or transparency for machine learning systems? We are a team of HCI researchers at UCSD who would like to interview you about your experience, process, and any problems you run into, particularly in how you evaluate your tools and explanations. The interview takes ~30 minutes, and you will be compensated $15.50/hour for your time. Please sign up for a time using this link: https://calendly.com/nyagnik/xai-interview #XAI #AI #ML #ExplainableAI
#AI #TrustworthyAI #ResponsibleAI #AIEthics #Explainability #ExplainableAI: "Trustworthy Artificial Intelligence (AI) is based on seven technical requirements sustained over three main pillars that should be met throughout the system’s entire life cycle: it should be (1) lawful, (2) ethical, and (3) robust, both from a technical and a social perspective. However, attaining truly trustworthy AI concerns a wider vision that comprises the trustworthiness of all processes and actors that are part of the system’s life cycle, and considers previous aspects from different lenses. A more holistic vision contemplates four essential axes: the global principles for ethical use and development of AI-based systems, a philosophical take on AI ethics, a risk-based approach to AI regulation, and the mentioned pillars and requirements. The seven requirements (human agency and oversight; robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental wellbeing; and accountability) are analyzed from a triple perspective: What each requirement for trustworthy AI is, Why it is needed, and How each requirement can be implemented in practice. On the other hand, a practical approach to implement trustworthy AI systems allows defining the concept of responsibility of AI-based systems facing the law, through a given auditing process. Therefore, a responsible AI system is the resulting notion we introduce in this work, and a concept of utmost necessity that can be realized through auditing processes, subject to the challenges posed by the use of regulatory sandboxes. Our multidisciplinary vision of trustworthy AI culminates in a debate on the diverging views published lately about the future of AI. Our reflections in this matter conclude that regulation is a key for reaching a consensus among these views, and that trustworthy and responsible AI systems will be crucial."