Speedprior/AI-links


AI, optimization processes, and the design-space of minds-in-general

What is Intelligence:

https://srconstantin.wordpress.com/2019/02/25/humans-who-are-not-concentrating-are-not-general-intelligences/

How is Artificial Intelligence (in general) different from Human Intelligence:

https://www.lesswrong.com/posts/tnWRXkcDi5Tw9rzXw/the-design-space-of-minds-in-general

Artificial Neural Networks are just a bunch of matrix multiplications? So is the standard model of particle physics. Iterate either one of them long enough, and you might get intelligence:

https://en.wikipedia.org/wiki/Operator_(physics)#Operators_in_matrix_mechanics

Also, matrix multiplications are linear, but Neural Nets can represent nonlinear functions. How does that work? The activation function for each neuron, which must be nonlinear:

https://towardsdatascience.com/how-to-choose-the-right-activation-function-for-neural-networks-3941ff0e6f9c

https://www.pinecone.io/learn/train-sentence-transformers-softmax/
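A tiny numpy sketch of that point: stacking linear layers with no activation collapses into a single linear map, while one ReLU between them breaks the equivalence. The matrices here are made-up toy values, not a real network:

```python
import numpy as np

# Two linear layers with no activation collapse into a single linear map,
# because matrix multiplication is associative:
W1 = np.array([[1.0, -1.0],
               [2.0,  1.0]])
W2 = np.array([[1.0, 1.0]])
x = np.array([1.0, 2.0])

two_layer = W2 @ (W1 @ x)
collapsed = (W2 @ W1) @ x          # one matrix does the same job
assert np.allclose(two_layer, collapsed)

# Insert a nonlinearity (here ReLU) between the layers and the collapse
# fails -- this is what lets deep nets represent nonlinear functions:
def relu(z):
    return np.maximum(z, 0.0)

nonlinear = W2 @ relu(W1 @ x)
assert not np.allclose(nonlinear, collapsed)
```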

Deep Learning, Transformers, and Large Language Models

It took humanity 80 years but we have finally invented computer security that can be bypassed with polite but firm insistence:

https://twitter.com/ESYudkowsky/status/1598663598490136576

AI Art:

https://octoml.ai/blog/from-gans-to-stable-diffusion-the-history-hype-and-promise-of-generative-ai/

AI Games:

https://medium.com/intuitionmachine/the-strange-loop-in-alphago-zeros-self-play-6e3274fcdd9f

When you talk to ChatGPT, you're talking to RLHF (Reinforcement Learning from Human Feedback: ChatGPT), on top of supervised fine-tuning (text-davinci-003), on top of the actual unsupervised transformer model (GPT-3), which is the giant inscrutable mass of neural weights. You can visualize it like this:

https://twitter.com/anthrupad/status/1626113680340566018

OpenAI made some cool stuff

Try out ChatGPT for the wow factor:

https://chat.openai.com/chat

Get under the hood with different fine-tunings of GPT-3:

https://platform.openai.com/playground

You can make cool stuff too!

Training a neural net means using something like Stochastic Gradient Descent to reduce the loss, as measured by the cost function. You can do a small version, yourself:

https://realpython.com/gradient-descent-algorithm-python/

https://machinelearningmastery.com/difference-between-backpropagation-and-stochastic-gradient-descent/
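A minimal sketch of that training loop: full-batch gradient descent fitting a two-parameter linear model to toy data. SGD would make the same update on a random subsample of the data each step; the dataset and learning rate here are invented for illustration:

```python
import numpy as np

# Tiny dataset generated from y = 2x + 1 (no noise, so the exact fit is recoverable)
X = np.array([0.0, 1.0, 2.0, 3.0])
y = 2 * X + 1

w, b = 0.0, 0.0      # parameters to learn
lr = 0.05            # learning rate

for _ in range(2000):
    pred = w * X + b
    err = pred - y
    loss = np.mean(err ** 2)         # the cost function (mean squared error)
    grad_w = 2 * np.mean(err * X)    # d(loss)/dw
    grad_b = 2 * np.mean(err)        # d(loss)/db
    w -= lr * grad_w                 # step downhill against the gradient
    b -= lr * grad_b

# w and b converge to roughly 2 and 1, recovering the generating line
```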

If you want to do real deep learning, yourself, don't worry about GPU scalpers. Just use Google Colab, for free:

https://colab.research.google.com/github/phlippe/uvadlc_notebooks/blob/master/docs/tutorial_notebooks/tutorial2/Introduction_to_PyTorch.ipynb

The deep learning breakthrough that specifically led to BERT and GPT is called the Transformer model. The paper that introduced it:

https://arxiv.org/abs/1706.03762
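The core operation that paper introduces is scaled dot-product attention, Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V. A numpy sketch with made-up shapes (a real Transformer runs many of these heads in parallel, with learned projections):

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)   # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)           # similarity of each query to each key
    weights = softmax(scores, axis=-1)        # each row is a distribution over keys
    return weights @ V, weights               # values mixed by attention weights

rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))   # 3 query positions, d_k = 4
K = rng.normal(size=(5, 4))   # 5 key positions
V = rng.normal(size=(5, 2))   # values with d_v = 2
out, attn_w = attention(Q, K, V)
# out has one d_v-dimensional vector per query; attention rows sum to 1
```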

Image generation can look something like this:

https://huggingface.co/blog/annotated-diffusion
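In the diffusion framing that post annotates, a forward process gradually noises an image, and a network is trained to reverse it. A numpy sketch of just the forward step, which can be sampled in closed form: x_t = sqrt(alpha_bar_t) x_0 + sqrt(1 - alpha_bar_t) noise. The linear beta schedule follows the DDPM setup; the "image" here is a made-up vector:

```python
import numpy as np

rng = np.random.default_rng(0)

T = 1000
betas = np.linspace(1e-4, 0.02, T)   # per-step noise variances (DDPM-style schedule)
alphas = 1.0 - betas
alpha_bar = np.cumprod(alphas)       # cumulative product: how much signal survives to step t

def q_sample(x0, t):
    """Sample x_t ~ q(x_t | x_0) directly, without looping over steps 1..t."""
    noise = rng.normal(size=x0.shape)
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1 - alpha_bar[t]) * noise

x0 = np.ones(8)                 # stand-in for a flattened image
x_early = q_sample(x0, 10)      # still close to x0
x_late = q_sample(x0, T - 1)    # nearly pure noise: alpha_bar[-1] is tiny
```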

Interpretability

LIME (Local Interpretable Model-agnostic Explanations):

https://homes.cs.washington.edu/~marcotcr/blog/lime/

SHAP (based on the Shapley Value, from economics)--how much did each feature contribute to the classification?:

https://www.aidancooper.co.uk/a-non-technical-guide-to-interpreting-shap-analyses/
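The Shapley value is a feature's marginal contribution to the prediction, averaged over every order in which features could be added. A toy exact computation by enumerating orderings (the coalition payoffs are invented numbers standing in for a real model; SHAP itself approximates this, since exact enumeration is exponential in the number of features):

```python
from itertools import permutations

features = ['a', 'b']

def coalition_value(S):
    # Hypothetical payoff of a feature coalition: 'a' contributes 10,
    # 'b' contributes 20, and together they get an interaction bonus of 6.
    S = frozenset(S)
    v = 0
    if 'a' in S:
        v += 10
    if 'b' in S:
        v += 20
    if {'a', 'b'} <= S:
        v += 6
    return v

def shapley(feature):
    # Average the feature's marginal contribution over all orderings
    total = 0
    perms = list(permutations(features))
    for order in perms:
        before = set()
        for f in order:
            if f == feature:
                total += coalition_value(before | {f}) - coalition_value(before)
                break
            before.add(f)
    return total / len(perms)

# The values split the bonus evenly and sum to the full coalition's payoff
```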

ELK (Eliciting Latent Knowledge)--getting a model to truthfully tell you what it's thinking:

https://www.alignmentforum.org/posts/rxoBY9CMkqDsHt25t/eliciting-latent-knowledge-elk-distillation-summary

Neel Nanda (This guy does a lot of interpretability research):

https://twitter.com/neelnanda5

AI Safety, AI Interpretability, AI Architecture, Generative AI, tons of other stuff:

https://gwern.net

What's next for AI?

See the cutting-edge (until at least next week) LLM with visual perception, from Microsoft, Kosmos-1:

https://arxiv.org/abs/2302.14045

Find out why AI is definitely going to kill us all:

https://www.alignmentforum.org/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities

You might think it won't, for one of these common reasons for skepticism, but they're all wrong:

https://www.researchgate.net/publication/368685319_AI_Risk_Skepticism_-A_Comprehensive_Survey

The selection of models via SGD has been referred to as "summoning a demon," but actually, it's much worse than that: You're summoning a demon-summoning demon:

https://arxiv.org/abs/1906.01820

Is this loss?

https://twitter.com/neelnanda5/status/1616590960066203648

About

For DC727
