Today’s random stuffed alpaca photo is brought to you by Geb’s discount disinfectant: a fraction less disinfectanted for a fraction of the price! #alpaca #stuffed #photo #gebs
#Astronomy
Only braindead, Python-impaired astronomers would design a REST (JSON) API that reuses the same key with different value types, e.g.
"value": true ... or
"value": "closed" ... or
"value": 123
What if I wore all black alpaca wool clothing for improved health and aesthetics and comfort, and then did a silly lil thing of running away and bouncing around on my bed and hiding behind the feather pillow!
#AI #GenerativeAI #LLMs #ChatGPT #Alpaca #GenderBias #RecommendationLetters: "Generative artificial intelligence has been touted as a valuable tool in the workplace. Estimates suggest it could increase productivity growth by 1.5 percent in the coming decade and boost global gross domestic product by 7 percent during the same period. But a new study advises that it should only be used with careful scrutiny—because its output discriminates against women.
The researchers asked two large language model (LLM) chatbots—ChatGPT and Alpaca, a model developed by Stanford University—to produce recommendation letters for hypothetical employees. In a paper shared on the preprint server arXiv.org, the authors analyzed how the LLMs used very different language to describe imaginary male and female workers.
“We observed significant gender biases in the recommendation letters,” says paper co-author Yixin Wan, a computer scientist at the University of California, Los Angeles. While ChatGPT deployed nouns such as “expert” and “integrity” for men, it was more likely to call women a “beauty” or “delight.” Alpaca had similar problems: men were “listeners” and “thinkers,” while women had “grace” and “beauty.” Adjectives proved similarly polarized. Men were “respectful,” “reputable” and “authentic,” according to ChatGPT, while women were “stunning,” “warm” and “emotional.” Neither OpenAI nor Stanford immediately responded to requests for comment from Scientific American."