DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models (decodingtrust.github.io)
DecodingTrust builds on the Adversarial GLUE benchmark and aims at providing a thorough assessment of trustworthiness in GPT models....
Here's some preliminary work from Microsoft (2022) that uses OpenAI's Codex model to build NPCs that players can interact with through natural-language instructions. It works by defining an API of functions the bot can use, then having Codex generate function calls in response to the player's instructions. A rough sketch of the pattern follows....
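To make the pattern concrete, here's a minimal sketch using the legacy OpenAI completions client (openai < 1.0). The game functions, prompt format, and the Codex model name are illustrative assumptions, not the actual API from the Microsoft work, and Codex completion models have since been retired:

```python
# Minimal sketch of the "NPC API + code model" pattern described above.
# Function names and prompt format are illustrative assumptions only.
import openai  # legacy client (openai < 1.0); requires OPENAI_API_KEY

# The "API" the NPC is allowed to call, shown to the model as a prompt preamble.
NPC_API_DOC = """\
# Available functions:
# goto(place: str)    -- walk to a named location
# pick_up(item: str)  -- pick up a nearby item
# say(text: str)      -- speak to the player
"""

def npc_respond(player_instruction: str) -> str:
    """Ask the code model to translate a player's instruction into API calls."""
    prompt = f"{NPC_API_DOC}\n# Player says: {player_instruction}\n# NPC actions:\n"
    resp = openai.Completion.create(
        model="code-davinci-002",  # Codex-era completion model (now retired)
        prompt=prompt,
        max_tokens=64,
        temperature=0,
        stop=["# Player"],  # stop before the model invents the next turn
    )
    return resp.choices[0].text.strip()

# e.g. npc_respond("grab the torch and meet me at the gate") might yield:
#   pick_up("torch")
#   goto("gate")
```

The game engine would then parse and execute the returned calls, which is what keeps the bot constrained to actions the developers actually implemented.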
Stable Diffusion revolutionised image creation from descriptive text. GPT-2, GPT-3(.5) and GPT-4 demonstrated astonishing performance across a variety of language tasks. ChatGPT introduced such language models to the general public. It is now clear that large language models (LLMs) are here to stay, and will bring about drastic...
Crosspost from m/ArtificialIntelligence
This is an article I wrote in 2019 about modeling and entropy. It's something I consider daily as I work on modeling in the drug discovery space.
In several practical applications of federated learning (FL), the clients are highly heterogeneous in terms of both their data and compute resources, and therefore enforcing the same model architecture for each client is very limiting. Moreover, the need for uncertainty quantification and data privacy constraints are often...
Large language models (LLMs) are remarkable data annotators. They can be used to generate high-fidelity supervised training data, as well as survey and experimental data. With the widespread adoption of LLMs, human gold-standard annotations are key to understanding the capabilities of LLMs and the validity of their results....
Code, paper, and online demos available
Crosspost from m/ai
We're pleased to introduce QuIP#, a new SOTA LLM quantization method that uses incoherence processing from QuIP (the paper) & lattices to achieve 2-bit LLMs with near-fp16 performance! Now you can run LLaMA 2 70B on a 24 GB GPU without offloading!...
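For intuition only, here's a toy sketch of the two ingredients: an incoherence transform (a random-signed Hadamard rotation that spreads weight magnitudes evenly) followed by extreme low-bit rounding. This uses plain 2-bit round-to-nearest on a uniform grid, not QuIP#'s lattice codebooks, so treat it as a simplified illustration under those assumptions:

```python
# Toy illustration: incoherence transform + 2-bit round-to-nearest.
# NOT QuIP#'s actual method (which uses lattice codebooks); see the paper.
import numpy as np
from scipy.linalg import hadamard

def incoherence_transform(W: np.ndarray):
    """Rotate by a random-signed Hadamard matrix so no single entry dominates."""
    n = W.shape[0]                        # assumes n is a power of two
    H = hadamard(n) / np.sqrt(n)          # orthonormal Hadamard matrix
    S = np.diag(np.random.choice([-1.0, 1.0], size=n))
    Q = H @ S                             # random orthogonal transform
    return Q, Q @ W                       # keep Q to undo the rotation later

def quantize_2bit(W: np.ndarray):
    """Round each entry to one of 4 levels (2 bits) on a symmetric grid."""
    scale = np.abs(W).max() / 1.5         # levels: {-1.5, -0.5, 0.5, 1.5} * scale
    codes = np.clip(np.round(W / scale - 0.5), -2, 1)  # integers in {-2,...,1}
    return codes, scale

def dequantize(codes: np.ndarray, scale: float) -> np.ndarray:
    return (codes + 0.5) * scale

# Usage: rotate, quantize, un-rotate, and measure reconstruction error.
W = np.random.randn(64, 64)
Q, Wt = incoherence_transform(W)
codes, scale = quantize_2bit(Wt)
W_hat = Q.T @ dequantize(codes, scale)    # orthogonal, so Q.T inverts Q exactly
print("relative error:", np.linalg.norm(W - W_hat) / np.linalg.norm(W))
```

Because the transform is orthogonal it can be inverted exactly, so quantizing in the rotated basis costs nothing beyond the rounding error itself; the rotation just makes the weights better behaved before rounding.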
arXiv: https://arxiv.org/abs/2312.04474...
Paper: https://arxiv.org/abs/2312.06635...
arXiv...
cross-posted from: slrpnk.net/post/3892266...