Nina Panickssery
Tech
Why do LLMs hallucinate?
And how can we fix it?
Jul 11
Why human experts may be useful for longer than you think
On the usefulness of humans in the loop for AI alignment and robustness
Jun 7
On optimizing for intelligibility to humans
Correctness is not everything
May 14
An LLM CodeForces champion is not taking your software engineering job (yet)
A naïve view of the eval results could lead you to overestimate the current state of AI capability
Apr 27
Tools I use for side projects
Maybe this is useful to you
Jan 2
Git
Peeking under the hood
Dec 29, 2023