Notepad of ideas I would like to spend more time on at some point:
How to best learn quantum mechanics
Why is nature quantized: Chapter 1 in Quantum Cookbook
What is the nature of using discrete quanta & combinatorics: Susskind book
Good examples for calculating quantum problems: Schwichtenberg book later chapters
Six Not-So-Easy Pieces (what was good there)
Quantum computing: Dancing with Qubits
Create a full list of quantum mechanics books
Different ways in which different books teach quantum mechanics
Flow chart from Schwichtenberg book, then create my own for other books
- Asserts discrete quantum framework, including kets, bras, operators
- Replaces sums with integrals to make the framework continuous
- Derives Schrödinger from the continuous framework by applying time evolution
- Introduces momentum operator by using Noether’s theorem – but the actual momentum operator isn’t derived anywhere, just asserted
- Asserts the Schrödinger equation
- Derives continuous operators from it
- Derives discrete quantum framework from spin
- Moves it into continuous by replacing sums with integrals
- Derives Schrödinger from the continuous framework by applying time evolution
- Not sure how momentum operator comes in
Drawing with entirely local information
A circle is a nominal emergence from a set of points, meaning that a point cannot have the property of being a circle.
But, is there a strictly local rule that would let points assemble into a circle? Each point draws the next point based only on its own information.
Now, can we train those local rules using a neural network that optimizes for the global property of a circle?
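A minimal sketch of one such local rule (all values here are illustrative): if each point turns by a constant angle and steps a constant distance, using only its own position and heading, the path closes into a circle, the discrete version of constant curvature.

```python
import math

def next_point(x, y, heading, step=0.1, turn=math.pi / 30):
    # Strictly local rule: a point knows only its own position and heading.
    # Constant turn + constant step = constant curvature = a circle.
    heading += turn
    return x + step * math.cos(heading), y + step * math.sin(heading), heading

x, y, heading = 1.0, 0.0, math.pi / 2   # arbitrary starting point and heading
points = [(x, y)]
for _ in range(60):                      # 60 turns of pi/30 = one full revolution
    x, y, heading = next_point(x, y, heading)
    points.append((x, y))
# The polygon closes: the last point coincides with the first (up to float error).
```

A neural-network version would replace `next_point` with a small learned function of the local state, trained against a global circularity loss.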
Create 4D game: rotate so you move in 4D direction (always use isometric 3D)
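A sketch of the core mechanic, under the assumption of a simple perspective projection (the `camera_w` parameter is made up): rotations in 4D happen in a plane rather than around an axis, so "rotating to move in the 4D direction" is a rotation in, say, the x-w plane.

```python
import math

def rotate_xw(p, theta):
    # 4D rotation in the x-w plane; y and z are untouched.
    x, y, z, w = p
    return (x * math.cos(theta) - w * math.sin(theta),
            y, z,
            x * math.sin(theta) + w * math.cos(theta))

def project_to_3d(p, camera_w=2.0):
    # Perspective projection from 4D down to 3D (camera_w is an assumed parameter);
    # the resulting 3D point would then be drawn isometrically as usual.
    x, y, z, w = p
    s = camera_w / (camera_w - w)
    return (x * s, y * s, z * s)

p = rotate_xw((1.0, 0.0, 0.0, 0.0), math.pi / 2)  # x-axis point rotated fully into w
q = project_to_3d(p)
```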
GPT-3 startup advisor
Use GPT-3 to create startup counseling based on Paul quotes from the Bible
Utility of web3 using the example of healthcare contract execution
As a study of oracle issues on the blockchain, like the Gaussian copula
Healthcare cost inflation over the decades
Show different cycles, including HMO in 1990s
Paint editor to write stories using GPT-3
Draw a picture with topics and valence (or somehow create a configuration language) which then drives GPT-3 to turn it into a story. Need to distinguish between valence and plot (a surprising story will give hints to various sub-plots but not resolve them until later).
Two dimensions in Computational Story Lab’s research: (1) power vs. weakness, (2) safety vs. danger.
Stories mostly revolve around characters, but they could also revolve around systems. So could draw characters into the storyline, define them, then generate the story.
Set up adjacency matrix for all characters, or eigenvector from which to randomly derive the adjacency matrix (Andrew Beveridge, Jie Shan on Game of Thrones), and create character interactions that way
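A toy sketch of the eigenvector idea (the character scores are hypothetical stand-ins for a centrality vector like the one Beveridge and Shan compute): treat the expected edge weight between two characters as the product of their scores, so the score vector is roughly the leading eigenvector of the expected adjacency matrix, then sample concrete interactions from it.

```python
import random

# Hypothetical "importance" scores, standing in for an eigenvector centrality.
centrality = {"A": 0.9, "B": 0.5, "C": 0.2}

def sample_adjacency(centrality, seed=42):
    # Rank-1 model: expected weight of edge (i, j) is c_i * c_j, so the
    # centrality vector approximates the leading eigenvector of the expected
    # adjacency matrix. Sample each edge as a Bernoulli draw on that weight.
    rng = random.Random(seed)
    names = sorted(centrality)
    adj = {}
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            p = centrality[a] * centrality[b]
            adj[(a, b)] = 1 if rng.random() < p else 0
    return adj

adj = sample_adjacency(centrality)
```

Each sampled edge would then seed a generated interaction scene between those two characters.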
Use GPT-3 Codex to write computer games (plus tutorial for kids)
Chess that can be easily reconfigured
Program that codes itself (list of functions that it can add to, or function that codes)
Text adventure (with parser)
Language information density
My work on Chinese written language information density, and visual density
Using crypto to own unique algorithms
The issue with NFTs is that they are typically just URLs stored on the blockchain, pointing at a server. The server renders the image for anyone who looks at it. You’re not really owning anything unique at all.
The idea here is to actually be the only person in the world who can execute a particular algorithm, or own a particular piece of content. This could be accomplished by storing the image data itself on the blockchain, but that’s also lame because everyone could see it.
It would be even better if you were the only one able to render it. This could be done by decomposing an image into wavelets, and then having a blockchain where each node is responsible for rendering a particular wavelet, and the blockchain as a whole renders your image.
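A 1D Haar sketch of the wavelet split (illustrative only; a real scheme would need 2D wavelets plus cryptography so nodes can't simply share their coefficients): the averages and the differences could live on different nodes, and neither half alone reconstructs the signal.

```python
def haar_step(signal):
    # One level of the 1D Haar transform: pairwise averages and differences.
    avg = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    diff = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return avg, diff

def haar_inverse(avg, diff):
    # Reconstruction needs BOTH coefficient sets.
    out = []
    for a, d in zip(avg, diff):
        out += [a + d, a - d]
    return out

signal = [9, 7, 3, 5]            # stand-in for one image row
avg, diff = haar_step(signal)    # avg = [8.0, 4.0], diff = [1.0, -1.0]
assert haar_inverse(avg, diff) == signal
```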
Another way could be to use a distributed GAN: each node is responsible for a particular area of neurons. The whole network renders the image, like a GAN does.
Finally, how about a code that traverses a network from node to node, but only when you get to a particular node can that node decode the code, and know which node to route to next?
Another approach. There are two distributed networks: (1) A network that covers an n-dimensional space, and where you can own a particular hypercube in that n-dimensional space. (2) A network that encodes a “world generator”, which is a GAN that runs in a distributed way. Any hypercube in the ownership network (1) is an input string that can get run through the generator network (2). But the generator network will only accept inputs (through some kind of cryptography) from the ownership network. So only someone who owns a particular input sequence in the ownership network can actually ask the generator network to create that content. To make this even more interesting, the generator network’s output can depend on time, which means that no two renderings are ever the same – but the only one who can request a rendering is the person whose ownership is encoded in the ownership network.
Crazier approach: you can own neurons in the generator network. Someone could buy the neurons that encode, say, the eyes of a dog (in an image classification network), and could charge rent for using those neurons.
Mathematical magic tricks
Use mathematical shape formulas to make people draw shapes without their knowing, by crowdsourcing the calculations among them
Do something with Fermat’s little theorem: x ^ p % p = x for prime p; when x < p, the result is literally x again
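A quick check with Python's three-argument `pow` (modular exponentiation), which is what makes the trick computable even for huge powers:

```python
# Fermat's little theorem: x**p is congruent to x mod p for prime p.
# When x < p the residue equals x itself, so the spectator's number comes back.
for p in (7, 13, 101):
    for x in range(p):
        assert pow(x, p, p) == x
```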
My previous magic tricks:
- Draw Albert Einstein after injecting Ace (card trick) + Bird (?) + 1 (math trick) + Stone (put in pocket)
- Chinese letters hidden in board, then use Google Translate
New logographic language
Esperanto is an attempt to create a simplified spoken language that is easy to learn. Do the same, but for written language. This should take the form of a logographic language (like Mandarin). One comparison would be emoji: each emoji is fairly easy to interpret in its meaning, but there is no overarching structure or way to minimize misunderstanding of meaning. So this new logographic language would need to be:
- Precise in its meaning and interpretation
- Easy to interpret from just looking at it
- Dense, compressing as much information as possible into as few characters as possible
- Which would probably require some simple rules for combining characters into super-characters, and having some particles to inflect meaning
The impact of IQ on the progress of science
Create a toy model that counts the number of people with IQ > 180 at any given point in time (like John von Neumann). What % of them deliver some big breakthrough in our scientific understanding? That would give us an idea of how much scientific progress would get boosted if we were able to create artificial intelligence with IQ over that level.
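A first cut of the toy model, assuming IQ is normal with mean 100 and SD 15 and using a rough world population figure:

```python
import math

def fraction_above(iq, mean=100.0, sd=15.0):
    # Upper-tail probability of a normal distribution via the error function.
    z = (iq - mean) / sd
    return 0.5 * math.erfc(z / math.sqrt(2))

world_population = 8e9  # rough 2020s figure
count_180 = world_population * fraction_above(180)
```

IQ 180 is more than 5 standard deviations out, so this lands on the order of a few hundred people alive at once; the interesting follow-up is the breakthrough rate among them.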
Model organizational problems as agent problems
There are too many business and management theory books that rely just on studying anecdotes. Instead, define simple agent models with very clear input parameters to model certain effects. For example, we should be able to model the effect of more diversity in idea generation. Then write a business book that models prevailing organizational theories using these agent models.
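A minimal sketch of the diversity effect, in the spirit of the idea above (the quality function, the perspectives, and all parameters are made-up toys, not a calibrated model): each agent proposes ideas near its own perspective, the team keeps the best one, and a team with spread-out perspectives covers more of the idea space.

```python
import random

def team_best_idea(perspectives, trials=50, seed=0):
    # Toy model: idea quality peaks at an unknown optimum (here 0.0).
    # Each agent samples ideas around its own perspective; team keeps the best.
    rng = random.Random(seed)
    best = float("-inf")
    for p in perspectives:
        for _ in range(trials):
            idea = rng.gauss(p, 0.2)
            quality = -abs(idea)        # closer to the optimum is better
            best = max(best, quality)
    return best

homogeneous = [1.0, 1.0, 1.0]           # everyone thinks alike
diverse = [1.0, 0.0, -1.0]              # spread-out perspectives
```

In this toy setup the diverse team reliably finds a better best idea; a business-book version would vary team size, spread, and landscape shape as the clear input parameters.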
Facebook paper on compressing images with AI
Oct 2022 Facebook paper uses AI to get 10x JPEG compression for images. Not that surprising; you can look at the trained network as a very long code book. But that code book could be used to extract the foundational meaning primitives of language, and those could be used to model a pictographic language. (1) Use the Mandarin information-density-per-pixel idea from above to calculate the average size in pixels of a character in a fully defined language; (2) divide the network size by that to get the number of language primitives necessary to fully encode text.
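One placeholder reading of that arithmetic (every number here is an assumption, not a measured value):

```python
# Step (2) as back-of-envelope arithmetic; all inputs are placeholders.
network_bytes = 100e6        # assumed size of the trained compression network
bits_per_character = 2000    # assumed information needed to specify one glyph
primitives = (network_bytes * 8) / bits_per_character
```

The real work would be pinning down both inputs: the network size from the paper, and the bits-per-glyph figure from the Mandarin density measurement.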
Sid Meier could have been Ray Dalio
In 1994, I spent a summer trying to program a simulation of the Iraq-Kurdish conflict at the time. My idea was to build a rule base that you could parameterize to simulate the outcomes of the conflict. I recently remembered that and thought, “that was really smart, I could have invented rule-based global macro trading”. But in reality, in my specific context at the time, it wasn’t particularly brilliant. Anyone who played video games then was well acquainted with rule-based engines, because every simulation game was one, whether SimCity, Civilization (1991), or even Defender of the Crown (1980s). The funny thing in hindsight is that there probably weren’t many hedge fund traders at the time who paid attention to this; otherwise Bridgewater and others wouldn’t have been such a big deal. LTCM, which was very global macro, vaguely rules-based, and soon very bankrupt, wasn’t big until 1997. I’m sure Sid Meier’s 1991 Civilization algorithms were smarter than Bridgewater’s until at least 2005 or so.
Which species has had the biggest impact on the universe
The obvious answer is “humans”, but is that really the answer? First, how would you even define “impact”? It seems like it would need to be defined through the greatest reduction in entropy, i.e., the greatest imposition of order. Isn’t it possible that much simpler organisms have created more reduction in entropy than humans? (There is a theory that the bacteria in our gut actually control us, and that they collectively form a superintelligence that really controls what’s going on, and we’re just the execution arm…)
One way to study this could be: if Stephen Wolfram’s theories on cellular automata are applicable here, then what is the cellular rule that creates the highest degree of order, as it executes? And can we somehow boil down human and other life forms’ activity to such a fundamental rule?