
Please be cautious in using the transcripts.

They were created mechanically and have mostly not been checked or revised.

Here is how they were created:

  1. the live lecture was recorded;
  2. the recording was machine-transcribed;
  3. an LLM was asked to clean up the transcript and link it to individual slides.

This is an error-prone process.

Click here and press the right key for the next slide.

(This may not work on mobile or iPad. You can try using Chrome or Firefox, but even that may fail. Sorry.)

also ...

Press the left key to go backwards (or swipe right)

Press n to toggle whether notes are shown (or add '?notes' to the url before the #)

Press m or double tap to show slide thumbnails (menu)

Press ? at any time to show the keyboard shortcuts

 

Notes and Slides

Useful to look back at how things have gone in areas where AI tools have been in use for longer.
More from the Magnus Carlsen interview appears later in these notes.
This is from Magnus Carlsen’s appearance on The Joe Rogan Experience 2275 (Feb 2025), where he describes how early adoption of neural-network chess engines in 2019 created a visible edge for players who used them and deepened their understanding of the game. In that segment he says:

‘after AlphaZero came out ... there was a period ... where you could very clearly see which players have been using these neural networks ...

And it just made us understand the game a lot better. ...

And that's also when I had maybe my best stretch of chess ever. Because I just understood these new things better than others.’

Magnus Carlsen, February 2025

alien intelligence

It’s alien intelligence. Of course you want to understand what it can do.
Alien intelligence nicely illustrated by Magnus Carlsen: it does not play chess like a human. That’s what creates the learning opportunity.
So frustrating: half of the philosophers I encounter are poorly informed and won’t touch it, and the other half want it to write papers for them.
source = https://x.com/wtgowers/status/1984340182351634571
He’s talking about gpt5-pro, the same tool we can use for about £60/month.
(He used it to prove a claim he needed, which might otherwise have taken him around an hour.)

‘I crossed an interesting threshold yesterday, which I think many other mathematicians have been crossing recently ...

it looks as though we have entered the brief but enjoyable era where our research is greatly sped up by AI but AI still needs us’

Timothy Gowers, October 2025

What’s changed for me?

- slower writing

- record all talks/rehearsals

- more handwritten notes

- wider range of sources

- feed the machines

Most of what I write now is likely to be consumed via LLMs. So I need to write for the machine.
I started making an effort to get as much of my stuff as I can into training data.
These are all very pedestrian, basic things.
I’m most excited about searching new sources. It beats relying on journal rankings and familiar authors (though those are still important, I think).

What do I find useful?

✘ writing content

✘ autocomplete

✘ summaries

✘ re-writing (? exploring)

I’m not sure, but asking for a rewritten draft at the point where you want to try out a major change of direction may help you see whether the change would work. I have not had much experience with this yet.
✔ searching

- new sources

- things in my library

- where in this source is X?

✔ organizing

- audio + slides -> prose

- handwriting -> text

✔ checking drafts
drafts = my own and others’ (the glossary is mainly for others)

- check draft vs source

- custom proofreading

- glossary w. quotes

- interpret cryptic reviews

'interpret cryptic reviews' = the odd comment can be really baffling. Asking the model to explain what the reviewer might have meant (giving options) can help in the same way as discussing with a co-author.
[THIS IS THE MAIN POINT OF THE WHOLE THING]
I believe that the new technology is going to enable many of us to do much better research than we could before.
(This is probably especially true for those of us who, like me, are playing in the lower leagues.)
I want our department to be one where people make informed choices about this.
I don't have much to suggest because I do not know a lot, but I have helped some people with specific needs outside philosophy.
I am always happy to talk to you about your needs.

suggestions

frontier models only ($$$)

Different models have different strengths. (Simple example: Gemini is best at long context (1M tokens).)

dramatic month-on-month improvements

If you haven't experimented in 2026, you do not know what is possible.

context

You need to feed the model all the information you would need to complete the task yourself.
(This links to not using consumer chat interfaces: they add a lot of context that is not relevant to your task. They also provide limited, if any, transparency about what context the model actually gets.)
(This also links to loops: for most tasks, the model cannot take in all the information at once.)

loops > chats

Example: fact-checking a draft against sources. You create an example of what you want by fact-checking the draft against one source, then ask the agent to loop over all the remaining sources.
(If this gets out of hand, you can also get the agent to reduce the results into an annotated priority list.)
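A minimal sketch of the loop idea. Here `ask_model` is a hypothetical stand-in for whatever model API or agent you actually use; the point is the structure: one focused call per source, then a final call to reduce the per-source reports into an annotated priority list.

```python
# Sketch: fact-check a draft against each source in turn, then
# reduce the per-source reports into one annotated priority list.
# `ask_model` is a placeholder, NOT a real API: swap in a call to
# your chosen model (OpenAI, Anthropic, Gemini, a local agent, ...).

def ask_model(prompt: str) -> str:
    # Placeholder: in practice this would send the prompt to an LLM
    # and return its reply.
    return f"[model response to {len(prompt)} chars of prompt]"

def fact_check(draft: str, sources: dict) -> dict:
    """One focused model call per source, instead of one huge chat."""
    reports = {}
    for name, text in sources.items():
        prompt = (
            "Check every claim in DRAFT against SOURCE; "
            "list mismatches only.\n"
            f"DRAFT:\n{draft}\n\nSOURCE ({name}):\n{text}"
        )
        reports[name] = ask_model(prompt)
    return reports

def reduce_reports(reports: dict) -> str:
    """Final pass: merge the per-source reports into a priority list."""
    combined = "\n".join(f"{name}: {report}" for name, report in reports.items())
    return ask_model(
        "Merge these fact-check reports into an annotated priority list:\n"
        + combined
    )
```

The design choice is the one the slide recommends: keep each call's context small and relevant (one draft, one source), and let the loop, not a single overloaded chat, cover the whole bibliography.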
This is a major issue. Subscription access to the best model from OpenAI costs at least £720/year (gpt5-pro with a business subscription), and the best model from Anthropic is £900/year.
On top of this, tools for literature search tend to be £240/year each, and you get the best results by combining at least two.
(It’s also not clear that we want subscriptions since these do not always support the best tools for research: they’re mainly focussed on either office work or coding. But API access outside of subscriptions is likely even more expensive.)

- research funds (?)

- aistudio.google.com

You cannot do serious work with aistudio unless you are prepared to copy and paste material like a cave person (slow and error-prone), but it is great for giving you a sense of what is possible.

- perplexity free year

- github pro

Highlight GitHub Pro: very limited access to frontier models, with limited context, but perhaps worth exploring before paying if you do not have large funds.
Most books date quickly. This is a bit old but maybe still useful.
(i) Mixed feelings; (ii) what initially gave Carlsen an advantage rapidly became required for all serious players.

‘I rarely play against engines at all because they just make me feel so stupid and useless. ...

So, I don't know, I haven’t found it particularly useful’

Carlsen (2025)

‘the neural nets have improved our understanding of the game immensely ...

I still see some people allowing these pawn advances and
I wonder if they didn’t learn their lesson from 2019.’

appendix

[Do not include: if this gets in, it’s all anyone will talk about. My focus should be on getting people talking about practical ways to advance their own research]
https://joshuagans.substack.com/p/reflections-on-vibe-researching
https://en.wikipedia.org/wiki/Joshua_Gans for photo
To me this is an example of how things are going wrong, although it is also here because coding assistants can now write papers that are publishable in mid-tier journals.
Gans spent a year spamming journals with AI-generated papers; many were published in mid-tier journals and some were reviewed at top-tier journals.

workslop

‘... Interestingly, it was rare for more than one referee to pick up on this, but thankfully, there was always one.’

Joshua Gans, January 2026