Nina Panickssery

Tech

Why do LLMs hallucinate?
And how can we fix it?
Jul 11 • Nina Panickssery

Why human experts may be useful for longer than you think
On the usefulness of humans in the loop for AI alignment and robustness
Jun 7 • Nina Panickssery

On optimizing for intelligibility to humans
Correctness is not everything
May 14 • Nina Panickssery

An LLM CodeForces champion is not taking your software engineering job (yet)
A naïve view of the eval results could lead you to overestimate the current state of AI capability
Apr 27 • Nina Panickssery

Tools I use for side projects
Maybe this is useful to you
Jan 2 • Nina Panickssery

Git
Peeking under the hood
Dec 29, 2023 • Nina Panickssery

© 2025 Nina Panickssery