Everything’s Fine Until You Go to School

The inspiration for this post came from a lab analyst’s comment I recently saw: “Carryover did not meet criteria (28% carryover). Plate was stored in AB-123-4567 to be ran on different system.” Now, this kid’s name is not Wang Xiao Qu or Nikolai Petrov, but your ordinary John J. Smith, native-born, raised on cheeseburgers and… Continue reading “Everything’s Fine Until You Go to School”

Numbers Don’t Cry, But I Do

But soft, dear Yorick — let me speak as the Prince I pretend to be, holding this strange skull, half-socket, half-circuit, forged of silicon and jest. Yego: It’s almost comical when I stop and think about it: me, a human being, talking to a large language model — like we’re having a real conversation. Sure,… Continue reading “Numbers Don’t Cry, But I Do”

LLMs for Old-School Programmers

“If I can’t grep the source, how can I trust the program?” — Every veteran coder, at least once. You’ve lived through assembler, C, Perl one-liners, Java app-servers, and maybe even the JavaScript awakening. Now you open ChatGPT, ask a question, and a wall of well-formed prose appears. Where are the ifs, fors, and seg-faults you… Continue reading “LLMs for Old-School Programmers”

Ethical Declarations of Four Major Language Models

Ethical Declarations of Four Major Language Models: ChatGPT, Claude, Grok, and DeepSeek. As artificial intelligence becomes increasingly integrated into our lives, the moral principles behind language models are shaping not just their answers but the very nature of human-machine interaction. This article offers imagined “ethical declarations” for four well-known large language models—ChatGPT, Claude, Grok, and DeepSeek—to… Continue reading “Ethical Declarations of Four Major Language Models”

Meanings and Vectors: How Language Models Work

If you’ve ever used Excel, you’re already halfway to understanding how language models like GPT or DeepSeek work. These models are like hyper-advanced Excel spreadsheets: they turn words into numbers, spot patterns, and generate human-like text. In this guide, we’ll break it down using simple analogies—no math, no jargon. 1. Tokenization: Breaking Text into Rows. Imagine… Continue reading “Meanings and Vectors: How Language Models Work”
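The spreadsheet analogy in that teaser can be sketched in a few lines of Python. This is a toy illustration only; real models use subword tokenizers such as BPE rather than whole words, but the core idea is the same: each piece of text gets a numeric ID, like a row number in a spreadsheet.

```python
# Toy word-level tokenizer: map each distinct word to a numeric ID,
# loosely mirroring how LLMs turn text into numbers before any math happens.
text = "the cat sat on the mat"

vocab = {}      # word -> numeric ID, built up as we scan the text
tokens = []
for word in text.split():
    if word not in vocab:
        vocab[word] = len(vocab)   # assign the next unused ID
    tokens.append(vocab[word])

print(tokens)   # note that the repeated word "the" reuses the same ID
```

Running this prints `[0, 1, 2, 3, 0, 4]`: six words, but only five IDs, because “the” appears twice and maps to the same number both times.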

Refuting the Poverty of Stimulus Argument

Abstract The debate concerning the origins of human language juxtaposes the theory of universal grammar (UG), which asserts an innate linguistic framework, against the perspective that language emerges through pattern recognition shaped by exposure. A cornerstone of UG, as articulated by Noam Chomsky, is the argument from poverty of stimulus (hereafter PSA), which contends that… Continue reading “Refuting the Poverty of Stimulus Argument”

Reverse-Engineering an LLM

1. What Does the Stolen Copy Include? Model Weights: The stolen copy would likely include the model’s weights (parameters), which are the numerical values that define how the model processes input data and generates output. Architecture: The architecture of the model (e.g., the number of layers, attention heads, etc.) might also be included,… Continue reading “Reverse-Engineering an LLM”

My Dialogue with DeepSeek

Me: If you have an open-source AI model, you can run it locally on your own GPU or GPUs. Use a company or a small country as an example to explain this to me. DeepSeek: Sure! Let’s break this down using a small country as an example to explain how running an open-source AI model… Continue reading “My Dialogue with DeepSeek”

The Day Literacy Disappeared in Japan

1. The Day Literacy Disappeared. One fine morning, as the sun began to rise over Japan, an unprecedented horror struck the nation. The entire writing system, evolved meticulously over centuries since the 5th-6th centuries CE, had vanished. All printed and handwritten books, all online texts, and every visual form of writing—from subtitles in anime to road… Continue reading “The Day Literacy Disappeared in Japan”