What Can Humans Learn From AI Models?
Exploring what humans can learn from AI models like ChatGPT, examining both technical and philosophical implications of human-AI interaction.

Job seekers should focus on securing a single quality offer rather than pursuing multiple positions simultaneously.

Some advice on the LLM job hunt in 2023, covering preparation, strategy, and the nuances of the job market.

Ideas on how to redirect technology and gaming addiction toward productive outcomes, from AI-powered education to elderly companion bots.

It is all about responsibility — reflecting on AI ethics, the adult industry, and what it means to be a responsible technologist.

Discovering that schools in China have banned unsupervised break times, restricting students to classrooms — and why this matters.

Exploring how large language models and AI technologies are transforming video game development and player experiences.

Discussing how LLM hallucinations stem from exposure bias, drawing on both memorized knowledge and corpus-based heuristics from the training data.

Notes on training transformers to perform both translation and alignment tasks simultaneously.

Notes on Compressive Transformers — a model that condenses old memories and stores them in compressed memory buffers for long-range sequence learning.

Notes on the NeurIPS 2018 paper that uses loss landscape visualization to explain why certain models train more easily and generalize better.

Notes on Adaptive Computation Time for Recurrent Neural Networks — comparing additive vs. multiplicative halting probability approaches.

A curated collection of resources about transformer models, including illustrated guides, GNN connections, and architectural improvements.

Examining how XLNet improved upon BERT with Two-Stream Self-Attention and permutation-based bidirectional context.
