
AI Is Speaking In Tongues


In my hometown in Ohio, church membership was a given for middle-class people. The town had a population of 8,000, yet somehow 19 churches were kept open and running. A big part of your social fabric was the kids who went to the same church you did, the people you would gravitate towards in a new social situation. The more rural and working-class your family was, the less likely you were to actually attend church regularly. You'd see those families on Christmas and Easter, but they weren't really part of the "church group".

My friend Mark was in this group. His family lived in an underground house next to their former motorcycle shop, which had closed down when his dad died. It was an hour's walk from my house to his through a beautiful forest filled with deer. I was often stuck within eyesight of the house, sitting on a tree stump as a freight train slowly rolled through in front of me. The former motorcycle shop had a Dr. Pepper machine in front of its soaped-over windows; I had a key, and we'd sneak in to explore the shop on sleepovers while his mom worked one of her many jobs.

[Image: what I mean when I say "underground house"]

She was a chain-smoker who rarely spoke, often lighting one cigarette with the burning cherry of the last as she drove us to the local video store to rent a video game. Mark's older brother also lived in the house but rarely left his room. Mostly it was the two of us bouncing around unsupervised, watching old movies and drinking way too much soda as his German Shepherd wheezed and coughed in the cigarette-smoke-filled house.

Families like this were often targeted by Evangelical Christian groups, which tried to lure them in with offers of youth groups that could entertain your kids while you worked. At some point, one of the youth pastors convinced Mark's mom that she should send both of us to his church. Instead of Blockbuster, we got dropped off in front of an anonymous steel warehouse structure with a cross and a buzzing overhead light, on a dark country road surrounded by cornfields. Mark hadn't really been exposed to religion; his father and mother had been deep into biker culture. I had already seen these types of places before and was dreading what I knew came next.

When we walked in, we were introduced to "Pastor Michael", who looked a bit like Santa Claus after an extreme diet, wearing serial-killer glasses. Mark bounced over and started asking him questions, but I kept my distance. The year before, I had volunteered to fix up an old train station, which involved a large crew of youth "supervised" by the fundamentalist Christian church that wanted to turn the station into a homeless shelter. We slept on the floors of the middle school in the smaller town neighboring mine, spending our days stripping paint and sanding floors and our evenings being lectured about the evils of sex and how America was in a "culture war". In retrospect, there probably should have been more protective gear involved in having child laborers remove lead paint, but I guess that was up to God.

After one of those long sessions where we were made to stand up and promise we wouldn't have sex before marriage, I made a joke on the walk back to our assigned classroom and was immediately set upon by the senior boy in the group. He had a military-style haircut, and he threw me against a locker hard enough that I saw stars. I had grown up going to Catholic school and Mass every Sunday, spending my Wednesday nights at CCD (Confraternity of Christian Doctrine), which was like Sunday school for Catholics. All of which is to say, I had pretty established "Christian" credentials. This boy let me know he thought I was a bad influence, a fake Christian, and that I should be careful since I'd be alone with him and his friends every night in the locked classroom. The experience left me extremely wary of these Evangelical cults as I lay silently in my sleeping bag on the classroom floor, listening to a hamster running in a wheel that had clearly been forgotten.

To those of you not familiar with this world, allow me to provide some context. My Catholic education presented a very different relationship with holy figures. God spoke directly to very few people, mostly saints. There were many warnings growing up about not falling into the trap of believing you were such a person, worthy of a vision or a direct conversation with a deity. It was softly suggested that this was more likely mental illness than divine intervention. God enters your heart, changes your behavior, and gives you peace, but you aren't in that echelon of rare individuals for whom a chat is justified. So to me, these Evangelicals claiming they could speak directly with God were committing heresy, a gross blasphemy in which random "Pastors" claimed to be saints.

The congregation started to file in, and what followed was one of the most surreal two hours of my life. People I knew, including the woman who worked at the library and a local postal worker, started to scream and wave their arms, blaming their health issues on Satan. At one point Scary Santa Claus started to shake and jerk, looking a bit like he was having a seizure. He started to babble loudly, moving around the room, and I stared as more and more people pretended this babbling meant something and then began doing it themselves. The bright lights and blaring music seemed to have worked these normal people into madness.

In this era before cell phones, there wasn't much I could do to leave the situation. I waited and watched as Mark became convinced these people were channeling the voice of God. "It's amazing, I really felt something in there, there was an energy in the room!" he whispered to me as I kept my eyes on the door. One of the youth pastors asked me if I felt the spirit moving through me and told me I shouldn't resist the urge to join in. I muttered that I was OK, said I had to go to the bathroom, then waited in the stall until the service wrapped up almost two hours later.

In talking to the other kids, I couldn't wrap my mind around the reality that they believed this. "It's the language of God; only a select few can understand what the Holy Spirit is saying through us." The language was all-powerful, allowing the Pastor to reveal prophecy to select members of the church and assist them with their financial investments. This was a deadly serious business that these normal people completely believed in, convinced that this nonsense jabbering, which would sometimes only kind of sound like language, was literally God talking through them.

I left confident that normal, rational people would never believe such nonsense. These people were gullible, and once I got out of this dead town I'd never have to be subjected to this level of delusion again. So imagine my surprise when, years later, I'm sitting in a giant conference hall in San Francisco as the CEO of Google explains to the crowd how AI is the future. This system that stitched together random words was going to replace all of us in the crowd, solve global warming, change every job. This was met with thunderous applause by the group, apparently excited to lose their health insurance. All of it had been kicked off with techno music and bright lights: a church service with a bigger budget.

Every meeting I went to was filled with people ecstatic about the possibility of replacing staff with this divine text generator. A French venture capitalist who shared an Uber with me to the original Google campus for meetings was nearly breathless with excitement. "Soon we might not even need programmers to launch a startup! Just a founder and their ideas, getting out to market as fast as they can dream it." I was tempted to comment that it seemed more likely I could replace him with an LLM, but it felt mean. "It is going to change the world," he muttered as we sat in a Tesla still being driven by a human.

It has often been suggested by religious people in my life that my community, the nonreligious tech enthusiasts, use technology as a replacement for religion. We reject the fantastical concept of gods and saints only to replace them with delusional ideas of the future. Self-driving cars were inevitable until it became clear that the problem was actually too hard and we quietly stopped talking about it. Establishing a colony on Mars is often discussed as if it is "soon", even if the idea of doing so far outstrips what we're capable of doing by a factor of 10. We tried to replace paper money with a digital currency and managed to create a global Ponzi scheme that accelerated the destruction of the Earth.

Typically I reject this logic. Technology, for its many faults, also produces a lot of things with actual benefits, which is not a claim religion can make most of the time. But after months of hearing this blind faith in the power of AI, the parallels between what I was hearing now and what the faithful had said to me after that service were eerie. Is this just a mass delusion, a desperate attempt by tech companies to convince us they are still worth a trillion dollars even though they have no new ideas? Is there anything here?

Glossolalia

Glossolalia, the technical term for speaking in tongues, is an old tradition with a more modern revival. It is a hallmark of the Pentecostal Church, usually surrounded by loud music, screamed prayers, and a leader trying to whip the crowd into a frenzy. Until the late 1950s it was confined to a few extreme groups, but it has since grown into a more and more common fixture in the US. The cultural interpretation of this trend presents it as a "heavenly language of the spirit" accessible only to the gifted. Glossolalists often report an intentional or spontaneous suspension of will in order to convey divine messages and prophecies.

In the early 1900s, W. J. Seymour, a minister in the US, began popularizing the practice of whipping his congregation into a frenzy until they spoke in tongues. His ministry was in Los Angeles, which quickly became the center of the movement. For those who felt disconnected from religion in a city of transplants, it must have been quite the experience to feel your deity speaking directly through you.

Its Biblical basis is flimsy at best, however. Joel 2:28-29 says:

And afterwards I will pour out my Spirit on all people. Your sons and daughters will prophesy, your old men will dream dreams, young men will see visions. Even on my servants, both men and women, I will pour out my Spirit in those days.

A lot of research has been done into whether this speech is a "language", with fascinating results. In The Psychology of Speaking in Tongues, Kildahl and Qualben attempted to figure that out. They concluded that while it could sound like a language, it was gibberish, closer to the fake language children use to practice the sounds of speech. To believers, though, this presented no problems.

He argued that glossolalia is real and that it is a gift from the Holy Spirit. He argued that a person cannot fake tongues. Tongues are an initial evidence of the spirit baptism. It is a spiritual experience. He observed that tongues cannot be understood by ordinary people. They can only be understood spiritually. He noted that when he speaks in tongues he feels out of himself. The feeling is very strange. One can cry, get excited, and laugh. As our respondent prayed, he uttered: Hiro---shi---shi---sha---a---karasha. He jumped and clapped his hands in excitement and charisma. He observed that if God allows a believer to speak in tongues there is a purpose for that. One can speak in tongues and interpret them at the same time. However, in his church there is no one who can interpret tongues. According to our respondent, tongues are intended to edify a person. Tongues are beneficial to the person who speaks in tongues. A person does not choose to pray in tongues. Tongues come through the Spirit of God. When speaking in tongues, it feels as if one has lost one's memory. It is as if one is drunk and the person seems to be psychologically disturbed. This is because of the power of the influence of the Holy Spirit. Tongues are a special visitation symbolising a further special touch of the Holy Spirit. Source

In reality, glossolalic speech is not a random and disorganized production of sounds. It has specific accents, intonations, and word-like units that resemble the original language of the speaker. [source] That doesn't make it a language, though, even if the words leave the speaker feeling warm and happy.
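That structure, word-like units assembled from the speaker's own sound inventory with no grammar and no meaning, is easy to imitate. Here is a toy sketch in Python; the phoneme lists are invented for illustration, not a real phonological model:

```python
import random

# Toy "glossolalia" generator: sample syllables built from a speaker's
# native sound inventory. The output has word-like units and familiar
# sound patterns, but no grammar and no meaning.
ONSETS = ["k", "sh", "r", "h", "s", "t", "m"]
VOWELS = ["a", "i", "o", "e"]

def babble(n_words, rng):
    words = []
    for _ in range(n_words):
        # Each "word" is one to three consonant-vowel syllables.
        syllables = [rng.choice(ONSETS) + rng.choice(VOWELS)
                     for _ in range(rng.randint(1, 3))]
        words.append("".join(syllables))
    return " ".join(words)

print(babble(5, random.Random(0)))
```

Run it a few times and the output sounds vaguely speech-like precisely because the building blocks come from a familiar language, which is what the research describes.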

What it actually is, at its core, is another tool the Evangelical machine has at its disposal to use the power of music and group suggestion to work people into a frenzy.

The tongue-speaker temporarily discards some of his or her ego functioning, as happens at such times as in sleep or in sexual intercourse. This phenomenon was also noticed in 2006 at the University of Pennsylvania, USA, by researchers under the direction of Andrew Newberg, MD, who completed the world's first brain-scan study of a group of Pentecostal practitioners while they were speaking in tongues. The researchers noticed that when the participants were engaged in glossolalia, activity in the language centres of the brain actually decreased, while activity in the emotional centres of the brain increased. The fact that the researchers observed no changes in any language areas led them to conclude that glossolalia is not associated with usual language function or usage.

Source

It's the power of suggestion. You are in a group of people and someone, maybe a plant, kicks it off. You are encouraged to join in and watch as your peers enthusiastically get involved. The experience has been explained as positive, so of course you remember it as a positive experience, even if in your core you understand that you weren't "channeling voices". You can intellectually know it is a fake and still feel moved by the experience.

LLMs

LLMs, large language models, which have been rebranded as "AI", share a lot with this Evangelical tool. AI was classically understood to mean a true artificial intelligence, a thinking machine that actually processed and understood your request. It was seen as the Holy Grail of computer science: the ability to take the best of human intellect and combine it into an eternal machine that could guide and help us. That definition has leaked out of the sphere of technology and now lives on mostly in science fiction, as the talking robot that helps the humans with whatever they've been tasked with.

If that's the positive spin, there has always been a counterargument. Known by the shorthand "Chinese room argument", it holds that a digital computer running code cannot have a mind, understanding, or consciousness. It can produce a very convincing fake, though. The thought experiment is as follows:

You've made a computer that behaves as if it understands Chinese. It takes Chinese characters as input and returns Chinese characters as output. It does so at such a high level that it passes the Turing test: a native Chinese speaker believes the thing they are speaking to is a human being speaking Chinese. But the distinction is that the machine doesn't understand Chinese; it is simulating the ability to speak Chinese.

Searle suggests that if you put him in a room with an English version of the program, he could receive the same characters through a slot in the door, process them according to the code, and produce Chinese characters as output, all without understanding anything being said. The room would pass the same test, yet he still wouldn't speak or understand Chinese.

This topic has been discussed at length by experts, so if you are interested in the counterarguments I suggest the great site by Stanford: https://plato.stanford.edu/entries/chinese-room/
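To make the thought experiment concrete, here is a deliberately tiny sketch of the "room" as a lookup table. The rules are invented for illustration; the point is that the program produces plausible replies while understanding nothing:

```python
# Toy "Chinese room": canned rules map input symbols to output symbols.
# The program follows the rulebook mechanically, exactly as Searle's
# operator would; nothing in it "understands" the conversation.
# (The rules here are made up for illustration.)
RULEBOOK = {
    "你好": "你好！",            # greeting in, greeting out
    "你会说中文吗？": "会。",     # "Do you speak Chinese?" -> "Yes."
}

def room(message):
    # Look the symbols up in the rulebook; fall back to "Please repeat."
    return RULEBOOK.get(message, "请再说一遍。")

print(room("你好"))
```

A two-entry dictionary is obviously not convincing, but scale the rulebook up far enough and you get fluent-looking output from a process that never understood a single symbol.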

What Is An AI?

The Neural Networks powering AI at a high level look as follows:

[Diagram: a high-level view of a neural network]

The magic part of AI is the Transformer neural network, which uses self-attention to process not just the elements of the text on their own but the ways they relate to each other. It has been fed enough examples to take an input and produce a likely output. Effectively, by ingesting every word spoken by humans on video or written on the internet, it can predict with some level of accuracy what a common collection of words to return for a given query would be.
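A transformer is enormously more capable, but the basic move, emitting a statistically likely continuation, can be sketched with a toy bigram model over a made-up corpus:

```python
from collections import defaultdict

# Toy "language model": count which word follows which in a tiny corpus,
# then emit the most frequent continuation. A real transformer does vastly
# more (self-attention over whole contexts), but the core move is the same:
# produce statistically likely next tokens, with no model of truth.
corpus = "the cat sat on the mat the cat ate the fish".split()

follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def next_word(word):
    candidates = follows.get(word)
    if not candidates:
        return None
    # Pick the most frequent follower (ties broken arbitrarily).
    return max(set(candidates), key=candidates.count)

print(next_word("the"))  # "cat" follows "the" twice, more than "mat" or "fish"
```

Nothing in this code knows what a cat is; it only knows which words tend to follow which. That is the sense in which an LLM "predicts" rather than "understands".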

The end result is a box that consumes a frightening amount of electricity to mimic the responses of an all-knowing eternal AI. The believers can commune directly with this machine spirit, asking it questions and getting back blocks of text or pictures that sometimes look like something a human would make. You cannot question the model or work backwards to determine how specifically it came to the conclusion it did. There is an element of the divine in the process, a specter of magic.

Hallucination

As AI was introduced, we were cautioned to be on the lookout for "hallucinations", where the model produces convincing-looking fakes. The problem with this warning is that it both flies in the face of how humans work (we're not great at questioning authoritative-sounding answers) and suggests this is a temporary problem. But the more you read about AI, the more you realize it can never solve the hallucination problem, because doing so would require it to understand what is "real" and what is "fake".

What do I mean on a practical level? I thought I would test the Google Gemini AI with some easy questions. I had recently written a bash script to query GCP Cloud Deploy for the status of deployments and write them to Markdown. This should be an easy test of the system, right? It's Google's AI writing a simple script, using the Google CLI, to interact with the Google Cloud Platform. A closed ecosystem, a high degree of confidence, and definitive sources of truth.

Can you write a bash script using the gcloud CLI to query the status of all deployments using cloud deploy, a GCP service, in the GCP project test123 and output the name and current state to a markdown file for each one?
#!/bin/bash

# Set project ID
PROJECT_ID="test123"

# Output directory for markdown files
OUTPUT_DIR="deployment_statuses"

# Create output directory if it doesn't exist
mkdir -p "$OUTPUT_DIR"

# Get all deployments
deployments=$(gcloud deploy deployments list --project "$PROJECT_ID" --format="value(NAME)")

# Loop through each deployment
for deployment in $deployments; do
  # Get deployment details
  deployment_details=$(gcloud deploy deployments describe "$deployment" --project "$PROJECT_ID" --format="yaml")

  # Parse name and state from YAML output
  name=$(echo "$deployment_details" | yq .name)
  state=$(echo "$deployment_details" | yq .status.state)

  # Create markdown file for deployment
  output_file="$OUTPUT_DIR/$name.md"
  echo "# Deployment: $name" > "$output_file"
  echo "Current State: $state" >> "$output_file"

  # Add additional information to markdown file as needed (e.g., rollout strategy, create time)
  # ...

done

echo "Deployment statuses written to markdown files in $OUTPUT_DIR"

Looks great, right? Exactly what I wanted. Except it doesn't work.

gcloud doesn't have a gcloud deploy deployments command:

NAME
    gcloud deploy - create and manage Cloud Deploy resources

SYNOPSIS
    gcloud deploy GROUP | COMMAND [GCLOUD_WIDE_FLAG ...]

DESCRIPTION
    Create and manage Cloud Deploy resources.

GCLOUD WIDE FLAGS
    These flags are available to all commands: --help.

    Run $ gcloud help for details.

GROUPS
    GROUP is one of the following:

     automation-runs
        Manages AutomationRuns resources for Cloud Deploy.

     automations
        Manages Automations resources for Cloud Deploy.

     custom-target-types
        Create and manage Custom Target Type resources for Cloud Deploy.

     delivery-pipelines
        Create and manage Delivery Pipeline resources for Cloud Deploy.

     job-runs
        Manages job runs resources for Cloud Deploy.

     releases
        Create and manage Release resources for Cloud Deploy.

     rollouts
        Create and manage Rollout resources for Cloud Deploy.

     targets
        Create and manage Target resources for Cloud Deploy.

Now, I know this because I wrote the original script, but there's no way someone looking at this output with no knowledge of the gcloud CLI would understand why it doesn't work.

I tried again, this time with a simpler question. Maybe, because not that many people use gcloud, it doesn't have a big enough sample size to construct a real answer. Let's ask a more basic question, but keep it on Google technology. Kubernetes has been documented to death: books written about it, millions of articles and blog posts. Certainly that will work.

How do I, in hosted Kubernetes on GCP, write a deployment configuration yaml which sets nodeAffinity using preferredDuringSchedulingIgnoredDuringExecution. The nodepool should be nodes02 with a weight of one.

What I'm trying to prompt it for here is a configuration file that uses not nodeSelector but node affinity, the softer preference level for assigning pods to nodes. But there's a small trick in the question: I want a Deployment, not a Pod spec. This is a distinction that requires a more nuanced comprehension of the subject matter, the ability to pull not the most common example but the more specific one. What we want is this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 3
  template: 
    metadata:
      labels:
        app: nginx
    spec:
      affinity:
        nodeAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 1
            preference:
              matchExpressions:
              - key: cloud.google.com/gke-nodepool
                operator: In
                values:
                - nodes02
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80

What I got back was this:

  spec:
    nodeSelector: 
      cloud.google.com/gke-nodepool: nodes02
    affinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        matchExpressions:
        - key: cloud.google.com/gke-nodepool
          operator: In
          values:
          - nodes02

The problem with this response is that it does something very different from what I was trying to do. nodeSelector is a stricter approach, ensuring pods are scheduled only on nodes that match the label. nodeAffinity is a much softer preference, telling Kubernetes I'd like the pods to go there if possible, but if that isn't possible, to put them where it normally would.

Both of these examples seem reasonable. The machine responded with something that could be construed as the answer, a clever parody of human intelligence, but ultimately it is more like a child playing. It doesn't understand the question, but it understands how to construct convincing-looking fakes.

To the faithful though, this isn't a problem.

However, if the training data is incomplete or biased, the AI model may learn incorrect patterns. This can lead to the AI model making incorrect predictions, or hallucinating.
For example, an AI model that is trained on a dataset of medical images may learn to identify cancer cells. However, if the dataset does not include any images of healthy tissue, the AI model may incorrectly predict that healthy tissue is cancerous. This is an example of an AI hallucination.

[Source]

This creates a false belief that the problem lies with the training data, which for both of my examples simply cannot be true. Google controls both ends of that equation and can very confidently "ground" the model with verifiable sources of information. In theory this should tether the model's output and reduce the chances of it inventing content. It reeks of a religious leader claiming that while that prophecy was false, the next one will be real if you believe hard enough. It also moves the responsibility for the problem from the AI model to "the training data", which for these LLMs is a black box of information. I don't know what the training data is, so I can't question whether it's good or bad.

Is There Anything Here?

Now, that isn't to say there isn't amazing work happening here. LLMs can do some fascinating things, and the transformer work has the promise to change how we let people interact with computers. Instead of an HTML form with strict validation and obtuse error messages, we can explain to people in real time what is happening and how to fix problems, and in general provide more flexibility when dealing with human inputs. We can have a machine look at less-structured data and find patterns. There are lots of ways for this tech to make meaningful differences in human life.

It just doesn't have a trillion dollars' worth of value. This isn't a magic machine that will replace all human workers, which for some modern executives is the Holy Grail of human progress, the equivalent of being able to talk directly to God: finally, all the money can flow directly to the CEO himself, cutting out all those annoying middle steps. The demand from investors for these companies to produce something new has outstripped their ability to do so, resulting in a dangerous technology being unleashed upon the world with no safeties. We've made a lying machine that doesn't show its work, making it even harder for people to tell truth from fiction.

If LLMs are going to turn into actual AI, we're still years and years away from that happening. What we have now is an interesting trick, a feel-good exercise that, unless you look too closely, makes it seem like you are talking to an immortal, all-knowing being that lives in the clouds. But just like everything else, if your faith is shaken for even a moment, the illusion collapses.

Questions/comments/concerns: https://c.im/@matdevdug




My book, Algorithms and Misinformation

Misinformation and disinformation are the biggest problems on the internet.

To solve a problem, you need to understand the problem. In Algorithms and Misinformation: Why Wisdom of the Crowds Failed the Internet and How to Fix It, I claim that the problem is not that misinformation exists, but that so many people see it. I explain why algorithms amplify scams and propaganda, how it easily can happen unintentionally, and offer solutions.

You can read much of the book for free. A single-article overview describes the entire book. If you are interested in what you might get from skimming the book, there is a bit more to read. And if you want part of what you would get from reading the entire book, all the excerpts are available. I wanted this book to be part of the debate on how to solve misinformation and disinformation on the internet. This book offers some practical solutions. It was intended to be an essential part of the discussion about viable solutions to what has become one of the biggest problems of our time.

I wrote, developed, and edited this book over four years. It was under contract with two agents for a year but was never accepted by a publisher. The book will not be published. The full manuscript had many more examples, interviews, and stories, but you can get some of what you would have gotten by reading the book by reading all the excerpts above.

Some might want to jump straight to ideas for solutions. I think solutions depend on who you are.

For those inside of tech companies, because it's easy for executives to unintentionally cause search and recommendations to amplify scams, it's important for everyone to question what algorithms are optimized for and make sure they point toward the long-term growth of the company.

For the average person, because the book shows companies actually make more money when they don't allow their algorithms to promote scams, this book gives hope that complaining about scammy products and stopping use of those products will change the internet we use every day.

For policy makers, because it's hard to regulate AI but easy to regulate what they already know how to regulate, this book claims they should focus on regulating scammy advertising since that funds misinformation, then ramp up antitrust efforts to increase consumers' ability to switch to products that haven't been enshittified and further raise long-term costs on companies that enshittify their products.

Why these are the solutions requires exploring the problem. The problem is not that misinformation exists, but that people see misinformation and disinformation. The goal should be to reduce it to nuisance levels.

Through stories, examples, and research, this book showed why so many people see misinformation and disinformation, that it is often unintentional, and that it doesn't maximize revenue for companies. Understanding why we see so much misinformation is the key to coming up with practical solutions.

I hope others find this useful. If you do, please let me know.

A Positive Story at the End of a Long Year


This is a short story about a student finding something helpful in class and making my day, preceded by a long-ish backstory.

In my programming languages course yesterday, I did a session on optimization. It's a topic of some importance, and students are usually interested in what it means for an interpreter or compiler to "optimize" code. I like to show students a concrete example that demonstrates the value of an optimization. Given where we are in the course and the curriculum, though, it would be difficult to do that with a full-featured language such as Python or Java, or even Racket. On the other end of the spectrum, the little languages they have been implementing and using all semester are too simple to benefit from meaningful optimization.

I found a sweet spot in between these extremes with BF. (Language alert!) I suppose it is more accurate to say that Eli Bendersky found the sweet spot, and I found Bendersky's work. Back in 2017, he wrote a series of blog posts on how to write just-in-time compilers, using BF as his playground. The first article in that series inspired me to implement something similar in Python and to adapt it for use with my students.

BF is well-suited to my purposes. It is a very simple language, consisting of only eight low-level operators. It is possible to write a small BF interpreter that students with only a background in data structures can understand. Even so, the language is Turing complete, which means that we can write interesting and arbitrarily complex programs.

The low-level simplicity of BF combines with its Turing completeness to create programs that are horribly inefficient if they are interpreted in a naive manner. There are many simple ways to optimize BF programs, including creating a jump table to speed up loops and parsing runs of identical opcodes (moves, increments, and decrements) as more efficient higher-level operators. Even better, the code to implement these optimizations is also understandable to a student with only data structures and a little background in programming languages.

My session is built around a pair of interpreters, one written in a naive fashion and the other implementing an optimization. This semester, we preprocessed BF programs to compute a table that makes jumping to the beginning or end of a loop an O(1) operation just like BF's other six primitives. The speed-up on big BF programs, such as factoring large numbers or computing a Mandelbrot set, is impressive.
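The jump-table optimization is easy to sketch. Below is a minimal BF interpreter in Python (not the one used in class, just an illustration) that precomputes matching bracket positions so that '[' and ']' become O(1) lookups instead of linear scans:

```python
def run_bf(program, input_bytes=b""):
    # Precompute a jump table so '[' and ']' are O(1) like the other ops.
    jump = {}
    stack = []
    for pc, op in enumerate(program):
        if op == "[":
            stack.append(pc)
        elif op == "]":
            start = stack.pop()
            jump[start] = pc
            jump[pc] = start

    tape = [0] * 30000
    ptr = pc = 0
    out = []
    inp = iter(input_bytes)
    while pc < len(program):
        op = program[pc]
        if op == "+":
            tape[ptr] = (tape[ptr] + 1) % 256
        elif op == "-":
            tape[ptr] = (tape[ptr] - 1) % 256
        elif op == ">":
            ptr += 1
        elif op == "<":
            ptr -= 1
        elif op == ".":
            out.append(chr(tape[ptr]))
        elif op == ",":
            tape[ptr] = next(inp, 0)
        elif op == "[" and tape[ptr] == 0:
            pc = jump[pc]       # skip the loop body
        elif op == "]" and tape[ptr] != 0:
            pc = jump[pc]       # jump back to the matching '['
        pc += 1
    return "".join(out)

# 8 * 8 + 1 = 65, the ASCII code for 'A'.
print(run_bf("++++++++[>++++++++<-]>+."))  # prints "A"
```

Without the table, every loop entry or exit would scan the program text for the matching bracket, which is exactly what makes naive interpreters so slow on big programs like a Mandelbrot generator.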

Now to the story.

At the end of class, I talk a bit about esoteric languages more broadly as a way for programmers to test the boundaries of programming language design, or simply to have fun. I get to tell students a story about a four-hour flight back from OOPSLA one year during which I decided to roll a quick interpreter for Ook in Scheme. (What can I say; programming is fun.)

To illustrate some of the fun and show that programmers can be artists, too, I demo programs in the language Piet, which is named for the Dutch abstract painter Piet Mondrian. He created paintings that look like this:

a Piet program that prints 'Piet'

That is not a Mondrian, but it is a legal program in the Piet language. It prints 'Piet'. Here is another legal Piet program:

a Piet program that prints 'Hello, World'

It prints "Hello, World". Here's another:

a Piet program that determines if a number is prime

That program reads an integer from standard input, determines whether it is prime or not, and prints 'Y' or 'N'. Finally, how about this:

a Piet program that prints 'tetris'

If you are a certain age, you may notice something special about this image: It is made up exclusively of Tetris pieces. The program prints... "Tetris". Programming truly is an art!

One of my students was inspired. While reviewing the session notes, he searched for more information about Piet online and found an interactive editor. He then used it to create a Piet program in honor of a friend of his who passed away earlier this semester. It prints the Xbox gamertag of his late friend. In his email to me, he said that writing this program was therapeutic.

I'm not sure one of my class sessions has ever had a more important outcome. I'm also not sure that I have ever been happier to receive email from a student.

This has been a tough year for most everyone, and especially for students who are struggling with isolation and countermeasures against a nasty virus. I'm so glad that programming gave one student a little solace, at least for an evening. I'm also glad he shared his story with me.

Read the whole story
faried
1266 days ago
reply
Lost in the Valley of Pleasure
Share this story
Delete

Writing Code that is Easy to Delete

1 Comment and 2 Shares

Last week someone tweeted a link to Write code that is easy to delete, not easy to extend. It contains a lot of great advice on how to create codebases that are easy to maintain and easy to change, the latter being an essential feature of almost any code that is the former. I liked this article so much that I wanted to share some of its advice here. What follows are a few of the many one- and two-liners that serve as useful slogans for building maintainable software, with light commentary.

... repeat yourself to avoid creating dependencies, but don't repeat yourself to manage them.

This line from the first page of the article hooked me. I'm not sure I had ever had this thought, at least not so succinctly, but it captures a bit of understanding that I think I had. Reading this, I knew I wanted to read the rest of the article.

Make a util directory and keep different utilities in different files. A single util file will always grow until it is too big and yet too hard to split apart. Using a single util file is unhygienic.

This isn't the sort of witticism that I quote in the rest of this post, but it's solid advice that I've come to live by over the years. I have this pattern.

Boiler plate is a lot like copy-pasting, but you change some of the code in a different place each time, rather than the same bit over and over.

I really like the author's distinction between boilerplate and copy-and-paste. Copy-and-paste has valuable uses (heresy, I know; more later), whereas boilerplate sucks the joy out of almost every programmer's life.

You are writing more lines of code, but you are writing those lines of code in the easy-to-delete parts.

Another neat distinction. Even when we understand that lines of code are an expense as much as (or instead of) an investment, we know that sometimes we have to write more code. Just do it in units that are easy to delete.

A lesson in separating concerns, from Python libraries:

requests is about popular http adventures, urllib3 is about giving you the tools to choose your own adventure.

Layers! I have had users of both of these libraries suggest that the other should not exist, but they serve different audiences. They meet different needs in a way that more than makes up for the cost of the supposed duplication.

Building a pleasant to use API and building an extensible API are often at odds with each other.

There's nothing earth-shattering in this observation, but I like to highlight different kinds of trade-off whenever I can. Every important decision we make writing programs is a trade-off.

Good APIs are designed with empathy for the programmers who will use it, and layering is realising we can't please everyone at once.

This advice elaborates on the earlier quote about repeating yourself to avoid creating dependencies, but not repeating yourself to manage them. Creating a separate, layered API is one way to avoid depending on code that is hard to delete.

Sometimes it's easier to delete one big mistake than try to delete 18 smaller interleaved mistakes.

Sometimes it really is best to write a big chunk of code precisely because it is easy to delete. An idea that is distributed throughout a bunch of functions or modules has to be disentangled before you can delete it.

Becoming a professional software developer is accumulating a back-catalogue of regrets and mistakes.

I'm going to use this line in my spring Programming Languages class. There are unforeseen advantages to all the practice we profs ask students to do. That's where experience comes from.

We are not building modules around being able to re-use them, but being able to change them.

This is another good bit of advice for my students, though I'll write this one more clearly. When students learn to program, textbooks often teach them that the main reason to write a function is that you can reuse it later, thus saving the effort of writing similar code again. That's certainly one benefit of writing a function, but experienced programmers know that there are other big wins in creating functions, classes, and modules, and that these wins are often even more valuable than reuse. In my courses, I try to help students appreciate the value of names in understanding and modifying code. Modularity also makes it easier to change and, yes, delete code. Unfortunately, students don't always get the right kind of experience in their courses to develop this deeper understanding.

Although the single responsibility principle suggests that "each module should only handle one hard problem", it is more important that "each hard problem is only handled by one module".

Lovely. The single module that handles a hard problem is a point of leverage. It can be deleted when the problem goes away. It can be rewritten from scratch when you understand the problem better or when the context around the problem changes.

This line is the heart of the article:

The strategies I've talked about -- layering, isolation, common interfaces, composition -- are not about writing good software, but how to build software that can change over time.

Good software is software that you can change. One way to create software you can change is to write code that you can easily replace.

Good code isn't about getting it right the first time. Good code is just legacy code that doesn't get in the way.

A perfect aphorism to close the article, and a perfect way to close this post: Good code is legacy code that doesn't get in the way.

1 public comment

kbrint (2111 days ago): Liked the commentary and the linked article.

sulrich (2111 days ago): thanks for the ref. it was worth going back and checking out some of the other posts on the sites ref'd.

Why Laziness Matters


Should a programming language be lazy by default? Robert Harper says no. Lennart Augustsson says yes. No matter who is right, I say all computer scientists should become fluent in a lazy language, whether or not they speak it in daily life.

My evidence is a post by Russ Cox on parsing with derivatives: a very experienced programmer very convincingly argues why a parsing algorithm has exponential time complexity. But the claims are very wrong; Adams, Hollenbeck, and Might proved the algorithm is cubic.

How did he err so badly? Did he underestimate the power of lazy evaluation?

I once exclusively wrote eager code, and I imagine my younger self would have agreed with his analysis without a second thought. Today I know better. Marvel at these lines by Doug McIlroy:

int fs = 0 : zipWith (/) fs [1..]    -- integral from 0 to x
sins = int coss
coss = 1 - int sins

It seems too good to be true. Indistinguishable from magic perhaps. But somehow it all works when lazily evaluated. Beware of summarily dismissing lazy code because it looks implausibly amazing.
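For readers without a lazy language at hand, the same mutually recursive definition can be imitated in an eager language with memoization. The sketch below is my own Python rendering, not part of the original post; sins(n) and coss(n) compute the nth Maclaurin coefficients of sine and cosine, mirroring McIlroy's three lines:

```python
from functools import lru_cache

def integ(f):
    """Coefficientwise integral from 0 to x of the series f."""
    def g(n):
        return 0.0 if n == 0 else f(n - 1) / n
    return g

@lru_cache(maxsize=None)
def sins(n):
    return integ(coss)(n)                    # sin = integral of cos

@lru_cache(maxsize=None)
def coss(n):
    # cos = 1 - integral of sin; the 1 contributes only to coefficient 0
    return (1.0 if n == 0 else 0.0) - integ(sins)(n)

print([round(sins(n), 6) for n in range(6)])
# [0.0, 1.0, 0.0, -0.166667, 0.0, 0.008333]
```

The recursion terminates because each coefficient depends only on earlier ones; lazy evaluation buys the same effect for free, without the explicit memoization.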

Also consider an earlier article by the same author on regular expressions. Again, a very experienced programmer very convincingly argues why a parsing algorithm has exponential time complexity. In this post, however, the claims are solid, and backed up by graphs of running times. (It’s worth reading by the way: it tells the tragedy of how popular regular expression implementations became sluggish twisted mockeries of true regular expressions, while offering hope for the future. My only criticism is it fails to mention regular expression derivatives.)

Why does the erroneous post lack similar graphs? Why didn’t the author throw some code together and benchmark it to produce damning evidence?

Perhaps he thought it was too tedious. This would imply unfamiliarity with lazy languages, because prototyping parsing with derivatives in Haskell is easier than criticizing it.

Preliminaries

We define a Pe data structure to represent parsing expressions, that is, the right-hand side of the production rules of a grammar.

import Control.Arrow
import Control.Monad.State
import qualified Data.Map as M
import qualified Data.Set as S

-- NT = non-terminal. (:.) = concatenation.
data Pe = NT String | Eps Char | Nul | Ch Char | Or [Pe] | Pe :. Pe | Del Pe

Although it represents the empty string, the Eps (for epsilon) expression holds a character that winds up in the abstract syntax tree (AST) returned by the parser. Similarly, the Del (for delta) expression, which is only generated internally, holds an expression which later helps build an AST.

A context-free grammar maps non-terminal symbols to parsing expressions:

type Grammar = M.Map String Pe

Our ASTs are full binary trees whose leaf nodes are characters (the free magma on the alphabet). The tree structure captures the order in which the production rules are applied.

data Ast = Bad | Lf Char | Ast :@ Ast deriving Show

isBad :: Ast -> Bool
isBad Bad = True
isBad _   = False

The Bad AST is returned for unparseable strings. An alternative is to drop Bad and replace Ast with Maybe Ast throughout our code.

A fancier parser might return a parse forest, that is, all parse trees for a given input. Ours simply settles on one parse tree.

Parsing with derivatives

To parse an input string, we first take successive derivatives of the start symbol with respect to each character of the input, taking care to leave bread crumbs in the Eps and Del expressions to record consumed characters. (The Del constructor is named for the delta symbol from the paper, but I also think of it as "deleted", because it remembers what has just been deleted from the input.)

Then the string is accepted if and only if the resulting expression is nullable, that is, accepts the empty string. As we traverse the expression to determine nullability, we also build an AST to return.

We memoize derivatives by adding entries to a state of type Grammar. Initially, this cache contains only the input grammar, mapping nonterminal symbols to Pe values. Later, we place a derivative at the key formed by concatenating the characters involved in the derivative with the nonterminal symbol being derived.

For example, if S is a nonterminal in the input grammar, then abS maps to derive 'a' (derive 'b' (NT "S")). We assume no nonterminal symbol in the input grammar is a suffix of any other nonterminal symbol, which is fine for a prototype.

It may help to imagine the grammar growing over time, gaining new production rules as we process input characters. Indeed, we consider nonterminals to refer to both nonterminals in the input grammar as well as their derivatives.

parse :: Grammar -> String -> String -> Ast
parse g start s = evalState (parseNull $ NT $ reverse s ++ start) g

Computing nullability requires finding a least fixed point. I found this the toughest part of the algorithm, partly because they never taught fixed point theory when I was in school. For some reason, the method reminds me of Hopcroft’s algorithm to minimize a DFA, where we repeatedly refine a partition until we reach a stable answer.

We initially guess each nonterminal is not nullable, which means it corresponds to the Bad AST. On encountering a nonterminal, if we’ve already seen it, then return our guess for that nonterminal. Otherwise, it’s the first time we’ve seen it and instead of guessing, we recursively traverse its corresponding expression. In doing so, we may discover our guess is wrong, so we correct it if necessary before returning an AST.

We repeat until our guesses stabilize. Guesses never change from a good AST to Bad, and the map of all guesses only changes if a guess is revised from Bad to a good AST. We exploit these facts to simplify our code slightly.

parseNull :: Pe -> State Grammar Ast
parseNull pe = leastFix M.empty where
  leastFix guessed = do
    (b, (_, guessed')) <- runStateT (visit pe) (S.empty, guessed)
    if M.size guessed == M.size guessed' then pure b else leastFix guessed'

visit :: Pe -> StateT (S.Set String, M.Map String Ast) (State Grammar) Ast
visit pe = case pe of
  Eps x  -> pure $ Lf x
  Del x  -> visit x
  Nul    -> pure Bad
  Ch _   -> pure Bad
  Or xs  -> chainsaw <$> mapM visit xs
  x :. y -> mul <$> visit x <*> visit y
  NT s -> do
    (seen, guessed) <- get
    case () of
      () | Just x <- M.lookup s guessed -> pure x
         | S.member s seen -> pure Bad
         | otherwise -> do
           modify $ first $ S.insert s
           b <- visit =<< lift (memoDerive s)
           unless (isBad b) $ modify $ second $ M.insert s b
           pure b

mul :: Ast -> Ast -> Ast
mul Bad _ = Bad
mul _ Bad = Bad
mul x y   = x :@ y

-- | Helps cut a non-empty parse forest down to one tree.
chainsaw :: [Ast] -> Ast
chainsaw xs | null xs'   = Bad
            | otherwise  = head xs'
            where xs' = filter (not . isBad) xs

Memoized derivatives are straightforward. For computing derivatives, we translate the rules given in the paper, and for memoization, on discovering a missing entry, we insert a knot-tying value before recursing, and replace it with the result of the recursion afterward.

memoDerive :: String -> State Grammar Pe
memoDerive cs@(c:s) = do
  m <- get
  unless (M.member cs m) $ do
    modify $ M.insert cs $ NT cs
    d <- derive c =<< memoDerive s
    modify $ M.insert cs d
  gets (M.! cs)
memoDerive _ = error "unreachable"

derive :: Char -> Pe -> State Grammar Pe
derive c pe = case pe of
  NT s             -> pure $ NT $ c:s
  Ch x | x == c    -> pure $ Eps x
  Or xs            -> Or <$> mapM (derive c) xs
  Del x :. y       -> (Del x :.) <$> derive c y
  x :. y           -> do
    b <- parseNull x
    dx <- derive c x
    if isBad b then pure $ dx :. y else do
      dy <- derive c y
      pure $ Or [dx :. y, Del x :. dy]
  _                -> pure Nul

Here’s the grammar that Cox claims will grind our parser to a halt:

cox :: Grammar
cox = M.fromList
  [ ("S", NT "T")
  , ("T", Or [NT "T" :. (Ch '+' :. NT "T"), NT "N"])
  , ("N", Ch '1')
  ]

Let’s try it on a small input in an interactive interpreter:

parse cox "S" "1+1+1"

The parser picks a particular parse tree:

(Lf '1' :@ (Lf '+' :@ Lf '1')) :@ (Lf '+' :@ Lf '1')

How about all strings of length 7 consisting of 1 or +?

filter (not . isBad . parse cox "S") $ replicateM 7 "+1"

Thankfully, we get:

["1+1+1+1"]

At last, it’s time to demolish Cox’s claims. We parse an 80-character input with a typo near the end:

main :: IO ()
main = print $ parse cox "S" $ concat (replicate 39 "1+") ++ "+1"

Our prototype is awful. We really should:

  • Add a slimmed down version of parseNull that returns a boolean instead of an AST, and call this in derive. We only want to recover the AST once the whole string has been parsed; the rest of the time, we only care whether an expression is nullable.

  • Use a better algorithm for finding the least fixed point. We’ve perhaps chosen the clunkiest and most obvious method.

  • Remove a layer of indirection when tying the knot. That is, point to a node directly rather than a string (which requires another lookup to get at the node).

  • Apply algebraic identities to reduce the number of nodes in parsing expressions and abstract syntax trees.

And yet, on my laptop:

Bad

real    0m0.220s
user    0m0.215s
sys     0m0.005s

Clearly, parsing with derivatives is efficient when run on the allegedly exponential-running-time example given by Cox.

The moral of the story

It’s best to test drive an algorithm before condemning it. If we see hilariously bad running times, then we can include them to hammer our points home. If we see surprisingly good running times, then there’s a mistake in our reasoning and we should keep quiet until we successfully attack the algorithm from another angle. (Cox rightly notes parsing with derivatives forgoes two key properties of yacc: linear running time and ambiguity detection. If only he had focused on these trade-offs.)

Is this practical for parsing with derivatives? Well, we have presented an entire program, yet we have written less code than appears in Cox's excellent article on regular expressions, which quotes just a few choice cuts from a presumably complete program. Indeed, with a splash of HTML, we can easily build an interactive online demo of parsing with derivatives.

The existence of the flawed post indicates no such sanity check was done, whether because of a poor understanding of lazy evaluation or because implementing a lazy algorithm was deemed too troublesome. Both problems are solved by learning a lazy language.

In sum, insufficient experience with lazy evaluation leads to faulty time complexity analysis. Therefore we should all be comfortable with lazy languages so computer science can progress unimpeded.


Sometimes, We Need to Make a Better Tool


I learned about a couple of cool CLI tools from Nikita Sobolev's Using Better CLIs. hub and tig look like they may be worth a deeper look. This article also reminded me of one of the examples in the blog entry I rmed the other day. It reflects a certain attitude about languages and development.

One of the common complaints about OOP is that what would be a single function in other programming styles usually ends up distributed across multiple classes in an OO program. For example, instead of:

    void draw(Shape s) {
        case s of
            Circle : [code for circle]
            Square : [code for square]
            ...
    }
the code for the individual shapes ends up in the classes for Circle, Square, and so on. If you have to change the drawing code for all of the shapes, you have to track down all of the classes and modify them individually.

This is true, and it is a serious issue. We can debate the relative benefits and costs of the different designs, of course, but we might also think about ways that our development tools can help us.
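To make the contrast concrete, here is a small sketch of the same drawing logic in both styles (my own illustration in Python; the names and string outputs are invented):

```python
# Procedural style: one draw() function that dispatches on the shape's kind.
# Changing how all shapes draw means editing this one function.
def draw(shape):
    kind = shape["kind"]
    if kind == "circle":
        return f"circle r={shape['r']}"
    elif kind == "square":
        return f"square side={shape['side']}"
    raise ValueError(f"unknown shape: {kind}")

# OO style: the same logic distributed across one class per shape.
# Adding a new shape means adding a class; changing how all shapes
# draw means visiting every class.
class Circle:
    def __init__(self, r):
        self.r = r
    def draw(self):
        return f"circle r={self.r}"

class Square:
    def __init__(self, side):
        self.side = side
    def draw(self):
        return f"square side={self.side}"

print(draw({"kind": "circle", "r": 2}))   # circle r=2
print(Circle(2).draw())                   # circle r=2
```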

As a grad student in the early 1990s, I worked in a research group that used VisualWorks Smalltalk to build all of its software. Even within a single Smalltalk image, we faced this code-all-over-the-place problem. We were adding methods to classes and modifying methods all the time as part of our research. We spent a fair amount of time navigating from class to class to work on the individual methods.

Eventually, one of my fellow students had an epiphany: we could write a new code browser. We would open this browser on a particular class, and the browser would provide a pane listing the class, all of its subclasses, and all of their subclasses. When we selected a method in the root class, the browser enabled us to click on any of the subclasses, see the code for the subclass's corresponding method, and edit it there. If the class didn't have an overriding method, we could add one in the empty pane, with the method signature supplied by the browser.

This browser didn't solve all of the problems we had learning to manage a large code base spread out over many classes, but it was a huge win for dealing with the specific issue of an algorithm being distributed across several kinds of object. It also taught me two things:

  • to appreciate the level of control that Smalltalk gave developers to inspect code and shape the development experience
  • to appreciate the mindset that creating new tools is the way to mitigate many problems in software development, if not to solve them completely

The tool-making mindset is one that I came to appreciate and understand more and more as the years passed. I'm disappointed whenever I don't put it to good use, but oftentimes I wise up and make the tools I need to help me do my work.
