mariusor, 20 days ago @mattb to me it's pretty clear that you can't trust what an LLM produces, precisely because it's not clean room: it's a derivative of all the code it was trained on.