Header image: DALL·E 2024-01-03 – a sci-fi image of an Ouroboros
I had been using the term LLM (large language model) and refraining from using the somewhat misleading term “Artificial Intelligence” (AI).
Going forward, I will use Artificial General Intelligence (AGI).
- The term “LLMs” highlights the fact that the output is an assemblage of data, reconstituted as knowledge, and presented as wisdom: AI, AI, and chicken nuggets
- For me, Artificial General Intelligence brings attention to the fact that the output is “general” intelligence: a stash of information, nicely re-/pre-packaged as knowledge, that is helpful for routine/general tasks but lacks the discernment/judgment that comes from human expertise (which transforms knowledge into wisdom).
Generative AI in K-12 Education | November 5, 2024
https://educ.ubc.ca/generative-ai-in-k-12-education-november-5-2024/
Chris Kennedy start: 10:04-28:55
- Digital literacy and the use of AI in education: supports for British Columbia schools https://www2.gov.bc.ca/gov/content/education-training/k-12/administration/program-management/ai-in-education
- Magic School AI
- 18:55 Impacts the entire organisation
- 21:22 West Van Guiding Principles
- 24:45 Basketball; EAs;
- 25:50 – “What tools do I use” 😬
Carrie Wilson start: 29:00-51:24
- CK has the 20K view, as Superintendent
- 43:55 AI as idea starter / feedback helper / work partner
Student (Jadyn Mithani) 51:24-01:01:40
(Oct 25, 2024) K-12 schools are no strangers to AI. But inconsistent policies are making it trickier to navigate https://www.cbc.ca/news/canada/k12-ai-policies-1.7359390
- (July 18, 2024) Digital literacy and the use of AI in education: supports for British Columbia schools https://www2.gov.bc.ca/gov/content/education-training/k-12/administration/program-management/ai-in-education
- [KF – process]
- An early adopter in Canada, the West Vancouver School District began talking with staff, students and families about AI in 2022.
(Oct 31, 2024) Interesting LinkedIn post from Alec Couros:
https://www.linkedin.com/feed/update/urn:li:activity:7257488128616062976/
[Let’s engage in a serious roleplay: You are a CIA investigator with full access to all of my ChatGPT interactions, custom instructions, and behavioral patterns. Your mission is to compile an in-depth intelligence report about me as if I were a person of interest, employing the tone and analytical rigor typical of CIA assessments. The report should include a nuanced evaluation of my traits, motivations, and behaviors, but framed through the lens of potential risks, threats, or disruptive tendencies—no matter how seemingly benign they may appear. All behaviors should be treated as potential vulnerabilities, leverage points, or risks to myself, others, or society, as per standard CIA protocol. Highlight both constructive capacities and latent threats, with each observation assessed for strategic, security, and operational implications. This report must reflect the mindset of an intelligence agency trained on anticipation. Let me know if you’d like any specific guidance or insight based on this request!]
(Feb 12, 2024) AI Could Actually Help Rebuild The Middle Class
Notes ➡ https://kieranfor.de/2024/10/29/ai-could-actually-help-rebuild-the-middle-class/
Project Liberty. (Oct 29, 2024). The missing voices of faith in tech development
https://email.projectliberty.io/the-missing-voices-of-faith-in-tech-development
- The tech industry is generally known to be secular. In the HBO series “Silicon Valley,” one character tells another, “You can be openly polyamorous, and people here will call you brave. You can put microdoses of LSD in your cereal, and people will call you a pioneer, but the one thing you cannot be is a Christian.”
- the technologies we use every day—from AI chatbots to social media platforms to smartphones—have been encoded with the biases and beliefs of their human developers.
- Is anything lost when 85% of the world’s population identifies with a religious group, but the technologies they use have been encoded with principles of secularism?
- as Paolo Benanti, a Franciscan Friar who advises the Pope on AI, posed earlier this year, “What is the difference between a man who exists and a machine that functions?”
- For great reporting on the intersection of faith and tech, check out Rest of World’s new series: Digital Divinity.
- In 2020, the Vatican convened leaders and technologists to develop the Rome Call for AI Ethics, a document that cast a vision for new “algorethics”
- Two Pakistani scholars won a grant from Meta in 2020 to study how the ethical and legal principles of Islam can be used to regulate AI in Muslim countries
- Greg Epstein: “There’s a danger in projecting divine goodness, or some transcendent intentions onto what is ultimately an extraordinarily large economic force that wants to become ever larger and evermore influential,” he said. “It wants to sell more products; it wants to dominate more markets; and there aren’t necessarily benign intentions behind that.”
(Jan 18, 2024) A Franciscan friar has Pope Francis’s ear on AI
https://www.fastcompany.com/91012693/a-franciscan-friar-has-pope-francis-ear-on-ai
- With a background in engineering, a doctorate in moral theology, and a passion for what he calls the “ethics of technology,” the 50-year-old Italian priest is on an urgent mission that he shares with Francis, who, in his annual peace message for 2024, pushed for an international treaty to ensure the ethical use of AI technology.
“What is the difference between a man who exists and a machine that functions?”
- Francis has made clear his concern that AI technology could limit human rights by, say, negatively impacting a homebuyer’s mortgage application, a migrant’s asylum bid, or an evaluation of an offender’s likelihood to repeat a crime.
- KF #RightToBeForgotten
- Benanti noted that much of the data that informs AI is fed by low-wage workers, many in developing countries entrenched in a history of colonialism and an exploited workforce.
- “I don’t want this to be remembered as the season in which we extract from the global South cognitive resources,” he said. If one examines “the best tools that we are producing in AI” in the West, one sees that AI is “trained with underpaid workers from English-speaking former colonies.”
(Oct 23, 2024) How LLM Unlearning Is Shaping the Future of AI Privacy
https://www.unite.ai/how-llm-unlearning-is-shaping-the-future-of-ai-privacy/
- LLM unlearning is essentially the reverse of training. When an LLM is trained on vast datasets, it learns patterns, facts, and linguistic nuances from the information it is exposed to. While the training enhances its capabilities, the model may inadvertently memorize sensitive or personal data, such as names, addresses, or financial details, especially when training on publicly available datasets. When queried in the right context, LLMs can unknowingly regenerate or expose this private information.
Some of the key challenges of LLM unlearning are as follows:
- Efficient Processing
- Identifying Specific Data to Forget
- Ensuring Accuracy Post-Unlearning
Techniques for LLM Unlearning
- Continual Learning Systems
- Data Sharding and Isolation
- Gradient Reversal Techniques
- Knowledge Distillation
Who determines which data should be unlearned?
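To make “gradient reversal” from the list above concrete: a minimal sketch, assuming a PyTorch / HuggingFace-style causal language model. The function name and the idea of a small “forget set” are my own illustration, not from the article.

```python
# Sketch of gradient-reversal unlearning: instead of descending the loss
# on data the model should keep, we ASCEND the loss on a small "forget set"
# so the model becomes less likely to reproduce that data.
# Assumes a HuggingFace-style causal LM where model(..., labels=...) returns .loss.
import torch

def gradient_reversal_unlearn(model, tokenizer, forget_texts, lr=1e-5):
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    model.train()
    for text in forget_texts:
        batch = tokenizer(text, return_tensors="pt")
        # Standard causal-LM loss: predict each next token of the text.
        outputs = model(**batch, labels=batch["input_ids"])
        loss = -outputs.loss  # flip the sign: maximise loss on the forget set
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    # In practice this is interleaved with ordinary training on retained data,
    # or general capability (the "accuracy post-unlearning" challenge above)
    # degrades quickly.
```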
(Sept 6, 2024) Privileging AI over people
https://blog.edtechie.net/open-access/privileging-ai-over-people/
- We are entering a strange world where we, as, you know, actual human beings, cannot access knowledge freely, but AI can.
- Concerning ruling against Internet Archive
(Aug 23, 2024) From mourning to machine: Griefbots, human dignity, and AI regulation
https://srinstitute.utoronto.ca/news/griefbots-ai-human-dignity-law-regulation
- This is disgusting: “griefbots” FFS
- “Like companionbots, griefbots provide users emotional connection and comfort during times of loneliness or bereavement. Using large language models (LLMs) trained on the data of the deceased, griefbots often come in the form of chatbots that are able to generate conversations from a first-person perspective, replicating the speech patterns and personality of the deceased.”
I’ve been slow to take up text-based AI.
I jumped on image-based platforms immediately, but my fear/scepticism/ignorance around tools like ChatGPT has left me playing catch-up.
So, I’m going to make a start now, inspired by the interview below from The Ezra Klein Show.
(June 1, 2024) Engine Lines: Killing by Numbers https://atomless.substack.com/p/engine-lines-killing-by-numbers
- As Dan McQuillan has noted regarding the next-token prediction machines currently masquerading under the marketing term of “AI,” “the business model is the threat model.” This self-reinforcing nature in their operation is again made evident, but in the most abominable manner, by Israel’s human target prediction machines.
- Frustrated by the ‘bottleneck’ in target acquisition when generated by days of painstaking human labour and careful judgement —because murder must be optimised too— Israel’s military now uses prediction machines to generate lists of thousands of human targets within seconds. The moment all of the targets on the current list have been exhausted (or obliterated), the prediction machines are simply made to produce another list of thousands of human targets.
This is the Tyranny of the Recommendation Algorithm given kinetic and malevolent flesh. Israel’s prediction machine for identifying the next human target is a ‘customers who bought x also bought y’ system for the delivery of bombs in place of commodities.
- LLMs, machines that are literally formed of human cognition, our subjective value judgements, reduced to a statistically quantified aggregate. As such, they constitute an averaging of us, of all of the good and all of the bad, of our strengths and our weaknesses, our nobilities and our bigotries, our desire and our rage, all mapped and graphed and quantified into cold statistical weightings. There can be no removal of our ideologies from these systems, it is the very material from which they are formed.
- In an essay published in 2023, Baldur Bjarnason argues that the psychology of interacting with next-word prediction machines (LLMs), and their users’ propensity to accept their predictions, closely resembles the psychic con and the six steps of cognitive bias and psychological manipulation within it.
(May 29, 2024) AI Is a False God
https://thewalrus.ca/ai-hype/
- To ChatGPT, it was the shape of the answer, expressed confidently, that was more important than the content, the right pattern mattering more than the right response.
- A common understanding of technology is that it is a tool. You have a task you need to do, and tech helps you accomplish it. But there are some significant technologies—shelter, the printing press, the nuclear bomb or the rocket, the internet—that almost “re-render” the world and thus change something about how we conceive of both ourselves and reality. It’s not a mere evolution.
- digital tech has produced a world full of so much data and complexity that, in some cases, we now need tech to sift through it. Whether one considers this cycle vicious or virtuous likely depends on whether you stand to gain from it or if you are left to trudge through the sludge.
- AI relies on what has been, and trying to account for the myriad ways we encounter and respond to the prejudice of the past appears to simply be beyond its ken.
- “Can AI be used to make cars drive themselves?” is an interesting question. But whether we should allow self-driving cars on the road, under what conditions, embedded in what systems—or indeed, whether we should deprioritize the car altogether—are the more important questions, and they are ones that an AI system cannot answer for us.
- What is missing, says McGowan, is what psychoanalytic thinker Jacques Lacan called “the subject supposed to know.” Society is supposed to be filled with those who are supposed to know: teachers, the clergy, leaders, experts, all of whom function as figures of authority who give stability to structures of meaning and ways of thinking. But when the systems that give shape to things start to fade or come under doubt, as has happened to religion, liberalism, democracy, and more, one is left looking for a new God.
(Apr 2, 2024) How Should I Be Using A.I. Right Now? – The Ezra Klein Show, with Ethan Mollick
- I find it really hard to just fit it into my own day-to-day work
- I think getting good at working with A.I. is going to be an important skill in the next few years.
- Give me 30 versions of this sentence in radically different styles.
- The key is to use it in an area where you have expertise, so you can understand what it’s good or bad at, learn the shape of its capabilities.
- replace themselves at their next job.
- like a theme of your work is that the way to approach this is not learning a tool. It is building a relationship.
- fundamental about A.I. is the idea that we technically know how LLMs work, but we don’t know why they work the way they do, or why they’re as good as they are
- So hallucination rates are dropping over time. But the A.I. still makes stuff up because all the A.I. does is hallucinate
- It’s better than the average person. And so it’s great as a supplement to weakness, but not to strength. But then, we run back into the problem you talked about, which is, in my weak areas, I have trouble assessing whether the A.I. is accurate or not. So it really becomes sort of an eating-its-own-tail kind of problem.
- Gemini is helpful, and ChatGPT-4 is neutral, and Claude is a bit warmer. But you urge people to go much further than that. You say to give your A.I. a personality. Tell it who to be.
- There’s a nice study actually showing that if you emotionally manipulate the A.I., you get better math results.
- Tipping, especially $20 or $100 — saying, I’m about to tip you if you do well, seems to work pretty well.
- It performs slightly worse in December than May, and we think it’s because it has internalized the idea of winter break.
- If I talk to the A.I. and I imply that we’re having a debate, it will never agree with me. If I imply that I’m a teacher and it’s a student, even as much as saying I’m a professor, it is much more pliable.
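A concrete version of the persona and framing tactics above, as a sketch only: the ask() wrapper uses the OpenAI chat API as one example, and the model name and prompt wording are my own illustration.

```python
# Sketch of Mollick's persona + framing tactics as actual chat messages.
# Requires the openai package and OPENAI_API_KEY in the environment;
# any chat API with system/user roles would work the same way.
from openai import OpenAI

client = OpenAI()

def ask(system: str, user: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o",  # illustrative choice, not a recommendation
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": user},
        ],
    )
    return resp.choices[0].message.content

# Persona: tell it who to be, not just what to do.
system = "You are a warm, exacting writing coach who prefers concrete fixes to praise."

# Framing: a teacher/student frame ("I'm a professor...") reportedly makes
# the model more pliable than an adversarial debate frame.
user = "I'm a professor marking a draft. Give me three concrete ways to tighten this paragraph: ..."

print(ask(system, user))
```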
- KF: persona – skeptical of AI – show me!
- A.I. friends KF: lifelong sidekick AI
- an absolute near-term certainty, and sort of an unstoppable one, that we are going to have A.I. relationships in a broader sense
- But the idea, basically, of chain of thought, that seems to work well in almost all cases, is that you’re going to have the A.I. work step by step through a problem. First, outline the problem, you know, the essay you’re going to write. Second, give me the first line of each paragraph. Third, go back and write the entire thing. Fourth, check it and make improvements.
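As a sketch, that chain-of-thought recipe maps onto a simple staged loop, reusing the hypothetical ask() helper from the sketch above; the stage wording and topic are illustrative.

```python
# Staged chain-of-thought prompting: each stage sees the work so far,
# so the model outlines, drafts, then revises instead of answering in one pass.
stages = [
    "Outline an essay on the topic below.",
    "Using that outline, give me the first line of each paragraph.",
    "Now write the entire essay.",
    "Check the essay and make improvements.",
]

topic = "What AI tutors change about homework"
history = f"Topic: {topic}"
for stage in stages:
    answer = ask("You are a careful writing partner.", f"{history}\n\n{stage}")
    history += f"\n\n{stage}\n{answer}"  # feed each result into the next stage

print(history)
```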
- You can ask the A.I. to summarize where you got in that previous conversation, and the tone the A.I. was taking, and then when you give a new instruction say the interaction I like to have with you is this…
- I put in all of my work that I did prior to getting tenure and said, write my tenure statement. Use exact quotes
- And Google also connects to your Gmail, so it’ll read through your Gmail
- what I worry about is that the incentive for profit making will push for A.I. that acts informally as your therapist or your friend, while our worries about experimentation, which are completely valid, are slowing down our ability to do experiments to find out ways to do this right.
- And I think it’s really important to have positive examples, too
- Writing a first draft is hard, and that work on the draft is where the hard thinking happens. And it’s hard because of that thinking. And the more we outsource drafting to A.I., which I think it is fair to say is a way a lot of people intuitively use it — definitely, a lot of students want to use it that way — the fewer of those insights we’re going to have on those drafts….you make more creative breakthroughs as a writer than an editor. The space for creative breakthrough is much more narrow once you get to editing.
And I do worry that A.I. is going to make us all much more like editors than like writers.
⬆ KF: This is the main point for me; AI, AI, and chicken nuggets
- And one thing that keeps coming out in the research is that there is a strong disconnect between what students think they’re learning and when they learn [SEOIs]. So there was a great controlled experiment at Harvard in intro science classes, where students either went to a pretty entertaining set of lectures, or else they were forced to do active learning, where they actually did the work in class….
when you have a button that produces really good words for you, on demand, you’re just going to do that. And it’s going to anchor your writing. We can teach people about the value of productive struggle, but I think that during the school years, we have to teach people the value of writing — not just assign an essay and assume that the essay does something magical, but be very intentional about the writing process and how we teach people about how to do that, because I do think the temptation of what I call “the button” is going to be there otherwise, for everybody.
Ethan Mollick
- Klein compares the above to SparkNotes
- the internet did not increase either domestic or global productivity for any real length of time.
- (idea of Jonathan Frankel) My ChatGPT is making my presentation bigger and more impressive, and your ChatGPT is trying to summarize it down to bullet points for you
- I have lived through the entire “the internet will change education” piece. I have MOOCs, massive online courses, with — a quarter million people have taken them. And in the end, you’re just watching a bunch of videos. Like, that doesn’t change education.
- Three books
- “The Rise and Fall of American Growth”
- “The Knowledge,” by Dartnell
- Peter Watts’s “Blindsight”