I'm addicted to caffeine, I watch TV shows (but not TV), and I'm on lambdamoo.

My book, Algorithms and Misinformation

Misinformation and disinformation are the biggest problems on the internet.

To solve a problem, you need to understand the problem. In Algorithms and Misinformation: Why Wisdom of the Crowds Failed the Internet and How to Fix It, I claim that the problem is not that misinformation exists, but that so many people see it. I explain why algorithms amplify scams and propaganda, show how this can easily happen unintentionally, and offer solutions.

You can read much of the book for free. If you want a single-article summary, there is an overview that describes the entire book. If you want roughly what you might get from skimming the book, there is a bit more. And if you want part of what you would get from reading the entire book, there are the excerpts. I wanted this book to be a part of the debate on how to solve misinformation and disinformation on the internet, and it offers some practical solutions to what has become one of the biggest problems of our time.

I wrote, developed, and edited this book over four years. It was under contract with two agents for a year but was never accepted by a publisher. The book will not be published. The full manuscript had many more examples, interviews, and stories, but you can get some of what you would have gotten by reading the book by reading all the excerpts above.

Some might want to jump straight to ideas for solutions. I think solutions depend on who you are.

For those inside of tech companies, because it's easy for executives to unintentionally cause search and recommendations to amplify scams, it's important for everyone to question what algorithms are optimized for and make sure they point toward the long-term growth of the company.

For the average person, because the book shows companies actually make more money when they don't allow their algorithms to promote scams, this book gives hope that complaining about scammy products and stopping use of those products will change the internet we use every day.

For policy makers, because it's hard to regulate AI but easy to regulate what they already know how to regulate, this book claims they should focus on regulating scammy advertising since that funds misinformation, then ramp up antitrust efforts to increase consumers' ability to switch to products that haven't been enshittified, further raising the long-term costs for companies that enshittify their products.

Why these are the solutions requires exploring the problem. The problem is not that misinformation exists, but that people see misinformation and disinformation. The goal should be to reduce it to nuisance levels.

Through stories, examples, and research, this book shows why so many people see misinformation and disinformation, that it is often unintentional, and that it doesn't maximize revenue for companies. Understanding why we see so much misinformation is the key to coming up with practical solutions.

I hope others find this useful. If you do, please let me know.

A Positive Story at the End of a Long Year


This is a short story about a student finding something helpful in class and making my day, preceded by a long-ish back story.

In my programming languages course yesterday, I did a session on optimization. It's a topic of some importance, and students are usually interested in what it means for an interpreter or compiler to "optimize" code. I like to show students a concrete example that demonstrates the value of an optimization. Given where we are in the course and the curriculum, though, it would be difficult to do that with a full-featured language such as Python or Java, or even Racket. On the other end of the spectrum, the little languages they have been implementing and using all semester are too simple to benefit from meaningful optimization.

I found a sweet spot in between these extremes with BF. (Language alert!) I suppose it is more accurate to say that Eli Bendersky found the sweet spot, and I found Bendersky's work. Back in 2017, he wrote a series of blog posts on how to write just-in-time compilers, using BF as his playground. The first article in that series inspired me to implement something similar in Python and to adapt it for use with my students.

BF is well-suited for my purposes. It is a very simple language, consisting of only eight low-level operators. It is possible to write a small interpreter for BF that students with only a background in data structures can understand. Even so, the language is Turing complete, which means that we can write interesting and arbitrarily complex programs.

The low-level simplicity of BF combines with its Turing completeness to create programs that are horribly inefficient if they are interpreted in a naive manner. There are many simple ways to optimize BF programs, including creating a jump table to speed up loops and parsing runs of identical opcodes (moves, increments, and decrements) as more efficient higher-level operators. Even better, the code to implement these optimizations is also understandable to a student with only data structures and a little background in programming languages.

My session is built around a pair of interpreters, one written in a naive fashion and the other implementing an optimization. This semester, we preprocessed BF programs to compute a table that makes jumping to the beginning or end of a loop an O(1) operation just like BF's other six primitives. The speed-up on big BF programs, such as factoring large numbers or computing a Mandelbrot set, is impressive.
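
To make the jump-table idea concrete, here is a minimal Python sketch of both steps (my own illustration, not the interpreter from the course; it omits BF's input operator ','):

    def build_jump_table(code):
        # Precompute matching-bracket positions so '[' and ']' become O(1) jumps.
        table, stack = {}, []
        for pos, op in enumerate(code):
            if op == '[':
                stack.append(pos)
            elif op == ']':
                start = stack.pop()
                table[start] = pos   # '[' jumps past the loop when the cell is 0
                table[pos] = start   # ']' jumps back when the cell is nonzero
        return table

    def run(code):
        # A tiny BF interpreter that consults the jump table instead of rescanning.
        jumps = build_jump_table(code)
        tape, ptr, pc, out = [0] * 30000, 0, 0, []
        while pc < len(code):
            op = code[pc]
            if op == '>':
                ptr += 1
            elif op == '<':
                ptr -= 1
            elif op == '+':
                tape[ptr] = (tape[ptr] + 1) % 256
            elif op == '-':
                tape[ptr] = (tape[ptr] - 1) % 256
            elif op == '.':
                out.append(chr(tape[ptr]))
            elif op == '[' and tape[ptr] == 0:
                pc = jumps[pc]
            elif op == ']' and tape[ptr] != 0:
                pc = jumps[pc]
            pc += 1
        return ''.join(out)

For example, run('++++++++[>++++++++<-]>+.') returns 'A'. A naive interpreter rescans the program for the matching bracket on every loop iteration; with the table, each jump is a single dictionary lookup.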

Now to the story.

At the end of class, I talk a bit about esoteric languages more broadly as a way for programmers to test the boundaries of programming language design, or simply to have fun. I get to tell students a story about a four-hour flight back from OOPSLA one year during which I decided to roll a quick interpreter for Ook in Scheme. (What can I say; programming is fun.)

To illustrate some of the fun and show that programmers can be artists, too, I demo programs in the language Piet, which is named for the Dutch abstract painter Piet Mondrian. He created paintings that look like this:

a Piet program that prints 'Piet'

That is not a Mondrian, but it is a legal program in the Piet language. It prints 'Piet'. Here is another legal Piet program:

a Piet program that prints 'Hello, World'

It prints "Hello, World". Here's another:

a Piet program that determines if a number is prime

That program reads an integer from standard input, determines whether it is prime or not, and prints 'Y' or 'N'. Finally, how about this:

a Piet program that prints 'tetris'

If you are a certain age, you may notice something special about this image: It is made up exclusively of Tetris pieces. The program prints... "Tetris". Programming truly is an art!

One of my students was inspired. While reviewing the session notes, he searched for more information about Piet online and found this interactive editor. He then used it to create a Piet program in honor of a friend of his who passed away earlier this semester. It prints the Xbox gamertag of his late friend. In his email to me, he said that writing this program was therapeutic.

I'm not sure one of my class sessions has ever had a more important outcome. I'm also not sure that I have ever been happier to receive email from a student.

This has been a tough year for most everyone, and especially for students who are struggling with isolation and countermeasures against a nasty virus. I'm so glad that programming gave one student a little solace, at least for an evening. I'm also glad he shared his story with me.


Writing Code that is Easy to Delete


Last week someone tweeted a link to Write code that is easy to delete, not easy to extend. It contains a lot of great advice on how to create codebases that are easy to maintain and easy to change, the latter being an essential feature of almost any code that is the former. I liked this article so much that I wanted to share some of its advice here. What follows are a few of the many one- and two-liners that serve as useful slogans for building maintainable software, with light commentary.

... repeat yourself to avoid creating dependencies, but don't repeat yourself to manage them.

This line from the first page of the paper hooked me. I'm not sure I had ever had this thought, at least not so succinctly, but it captures a bit of understanding that I think I had. Reading this, I knew I wanted to read the rest of the article.

Make a util directory and keep different utilities in different files. A single util file will always grow until it is too big and yet too hard to split apart. Using a single util file is unhygienic.

This isn't the sort of witticism that I quote in the rest of this post, but it's solid advice that I've come to live by over the years. I follow this pattern myself.

Boiler plate is a lot like copy-pasting, but you change some of the code in a different place each time, rather than the same bit over and over.

I really like the author's distinction between boilerplate and copy-and-paste. Copy-and-paste has valuable uses (heresy, I know; more later), whereas boilerplate sucks the joy out of almost every programmer's life.

You are writing more lines of code, but you are writing those lines of code in the easy-to-delete parts.

Another neat distinction. Even when we understand that lines of code are an expense as much as (or instead of) an investment, we know that sometimes we have to write more code. Just do it in units that are easy to delete.

A lesson in separating concerns, from Python libraries:

requests is about popular http adventures, urllib3 is about giving you the tools to choose your own adventure.

Layers! I have had users of both of these libraries suggest that the other should not exist, but they serve different audiences. They meet different needs in a way that more than makes up for the cost of the supposed duplication.
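
To make the layering concrete, here is the same request at both levels, using only the standard entry points of each library (a quick sketch, not code from the article):

    import requests
    import urllib3

    # High level: requests handles the popular case in one line.
    r = requests.get("https://example.com")
    print(r.status_code)

    # Lower level: urllib3 hands you the pieces (connection pool, raw response)
    # so you can choose your own adventure.
    http = urllib3.PoolManager()
    resp = http.request("GET", "https://example.com")
    print(resp.status)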

Building a pleasant to use API and building an extensible API are often at odds with each other.

There's nothing earth-shattering in this observation, but I like to highlight different kinds of trade-off whenever I can. Every important decision we make writing programs is a trade-off.

Good APIs are designed with empathy for the programmers who will use it, and layering is realising we can't please everyone at once.

This advice elaborates on the earlier quote about repeating code to avoid creating dependencies, but not to manage them. Creating a separate API is one way to avoid dependencies on code that is hard to delete.

Sometimes it's easier to delete one big mistake than try to delete 18 smaller interleaved mistakes.

Sometimes it really is best to write a big chunk of code precisely because it is easy to delete. An idea that is distributed throughout a bunch of functions or modules has to be disentangled before you can delete it.

Becoming a professional software developer is accumulating a back-catalogue of regrets and mistakes.

I'm going to use this line in my spring Programming Languages class. There are unforeseen advantages to all the practice we profs ask students to do. That's where experience comes from.

We are not building modules around being able to re-use them, but being able to change them.

This is another good bit of advice for my students, though I'll write this one more clearly. When students learn to program, textbooks often teach them that the main reason to write a function is that you can reuse it later, thus saving the effort of writing similar code again. That's certainly one benefit of writing a function, but experienced programmers know that there are other big wins in creating functions, classes, and modules, and that these wins are often even more valuable than reuse. In my courses, I try to help students appreciate the value of names in understanding and modifying code. Modularity also makes it easier to change and, yes, delete code. Unfortunately, students don't always get the right kind of experience in their courses to develop this deeper understanding.
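
As a small illustration of this point (my own hypothetical example, not from the article), compare a policy inlined into a workflow with the same policy isolated behind a named function:

    # Inlined: the discount rule is tangled into checkout, so changing or
    # deleting the rule means editing and re-testing checkout itself.
    def checkout_inlined(cart, user):
        total = sum(item["price"] for item in cart)
        if user["signup_days"] < 30 and total > 100:
            total *= 0.9
        return total

    # Named: the rule has a home of its own. Even if it is never reused, it
    # can be changed, tested, or deleted without touching the rest of checkout.
    def new_customer_discount(total, user):
        if user["signup_days"] < 30 and total > 100:
            return total * 0.9
        return total

    def checkout(cart, user):
        total = sum(item["price"] for item in cart)
        return new_customer_discount(total, user)

The named version is no shorter, but the part most likely to change is now the part that is easiest to delete.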

Although the single responsibility principle suggests that "each module should only handle one hard problem", it is more important that "each hard problem is only handled by one module".

Lovely. The single module that handles a hard problem is a point of leverage. It can be deleted when the problem goes away. It can be rewritten from scratch when you understand the problem better or when the context around the problem changes.

This line is the heart of the article:

The strategies I've talked about -- layering, isolation, common interfaces, composition -- are not about writing good software, but how to build software that can change over time.

Good software is software that you can change. One way to create software you can change is to write code that you can easily replace.

Good code isn't about getting it right the first time. Good code is just legacy code that doesn't get in the way.

A perfect aphorism to close the article, and a perfect way to close this post: Good code is legacy code that doesn't get in the way.


Why Laziness Matters


Should a programming language be lazy by default? Robert Harper says no. Lennart Augustsson says yes. No matter who is right, I say all computer scientists should become fluent in a lazy language, whether or not they speak it in daily life.

My evidence is a post by Russ Cox on parsing with derivatives: a very experienced programmer very convincingly argues why a parsing algorithm has exponential time complexity. But the claims are very wrong; Adams, Hollenbeck, and Might proved the algorithm is cubic.

How did he err so badly? Did he underestimate the power of lazy evaluation?

I once exclusively wrote eager code, and I imagine my younger self would have agreed with his analysis without a second thought. Today I know better. Marvel at these lines by Doug McIlroy:

int fs = 0 : zipWith (/) fs [1..]    -- integral from 0 to x
sins = int coss
coss = 1 - int sins

It seems too good to be true. Indistinguishable from magic perhaps. But somehow it all works when lazily evaluated. Beware of summarily dismissing lazy code because it looks implausibly amazing.

Also consider an earlier article by the same author on regular expressions. Again, a very experienced programmer very convincingly argues why a parsing algorithm has exponential time complexity. In this post, however, the claims are solid, and backed up by graphs of running times. (It’s worth reading by the way: it tells the tragedy of how popular regular expression implementations became sluggish twisted mockeries of true regular expressions, while offering hope for the future. My only criticism is it fails to mention regular expression derivatives.)

Why does the erroneous post lack similar graphs? Why didn’t the author throw some code together and benchmark it to produce damning evidence?

Perhaps he thought it was too tedious. This would imply unfamiliarity with lazy languages, because prototyping parsing with derivatives in Haskell is easier than criticizing it.

Preliminaries

We define a Pe data structure to represent parsing expressions, that is, the right-hand side of the production rules of a grammar.

import Control.Arrow
import Control.Monad.State
import qualified Data.Map as M
import qualified Data.Set as S

-- NT = non-terminal. (:.) = concatenation.
data Pe = NT String | Eps Char | Nul | Ch Char | Or [Pe] | Pe :. Pe | Del Pe

Although it represents the empty string, the Eps (for epsilon) expression holds a character that winds up in the abstract syntax tree (AST) returned by the parser. Similarly, the Del (for delta) expression, which is only generated internally, holds an expression which later helps build an AST.

A context-free grammar maps non-terminal symbols to parsing expressions:

type Grammar = M.Map String Pe

Our ASTs are full binary trees whose leaf nodes are characters (the free magma on the alphabet). The tree structure captures the order the production rules are applied.

data Ast = Bad | Lf Char | Ast :@ Ast deriving Show

isBad :: Ast -> Bool
isBad Bad = True
isBad _   = False

The Bad AST is returned for unparseable strings. An alternative is to drop Bad and replace Ast with Maybe Ast throughout our code.

A fancier parser might return a parse forest, that is, all parse trees for a given input. Ours simply settles on one parse tree.

Parsing with derivatives

To parse an input string, we first take successive derivatives of the start symbol with respect to each character of the input, taking care to leave bread crumbs in the Eps and Del expressions to record consumed characters. (The Del constructor is named for the delta symbol from the paper, but I also think of it as "deleted", because it remembers what has just been deleted from the input.)

Then the string is accepted if and only if the resulting expression is nullable, that is, accepts the empty string. As we traverse the expression to determine nullability, we also build an AST to return.

We memoize derivatives by adding entries to a state of type Grammar. Initially, this cache contains only the input grammar, mapping nonterminal symbols to Pe values. Later, we place a derivative at the key formed by concatenating the characters involved in the derivative with the nonterminal symbol being derived.

For example, if S is a nonterminal in the input grammar, then abS maps to derive 'a' (derive 'b' (NT "S")). We assume no nonterminal symbol in the input grammar is a suffix of any other nonterminal symbol, which is fine for a prototype.

It may help to imagine the grammar growing over time, gaining new production rules as we process input characters. Indeed, we consider nonterminals to refer to both nonterminals in the input grammar as well as their derivatives.

parse :: Grammar -> String -> String -> Ast
parse g start s = evalState (parseNull $ NT $ reverse s ++ start) g

Computing nullability requires finding a least fixed point. I found this the toughest part of the algorithm, partly because they never taught fixed point theory when I was in school. For some reason, the method reminds me of Hopcroft’s algorithm to minimize a DFA, where we repeatedly refine a partition until we reach a stable answer.

We initially guess each nonterminal is not nullable, which means it corresponds to the Bad AST. On encountering a nonterminal, if we’ve already seen it, then return our guess for that nonterminal. Otherwise, it’s the first time we’ve seen it and instead of guessing, we recursively traverse its corresponding expression. In doing so, we may discover our guess is wrong, so we correct it if necessary before returning an AST.

We repeat until our guesses stabilize. Guesses never change from a good AST to Bad, and the map of all guesses only changes if a guess is revised from Bad to a good AST. We exploit these facts to simplify our code slightly.

parseNull :: Pe -> State Grammar Ast
parseNull pe = leastFix M.empty where
  leastFix guessed = do
    (b, (_, guessed')) <- runStateT (visit pe) (S.empty, guessed)
    if M.size guessed == M.size guessed' then pure b else leastFix guessed'

visit :: Pe -> StateT (S.Set String, M.Map String Ast) (State Grammar) Ast
visit pe = case pe of
  Eps x  -> pure $ Lf x
  Del x  -> visit x
  Nul    -> pure Bad
  Ch _   -> pure Bad
  Or xs  -> chainsaw <$> mapM visit xs
  x :. y -> mul <$> visit x <*> visit y
  NT s -> do
    (seen, guessed) <- get
    case () of
      () | Just x <- M.lookup s guessed -> pure x
         | S.member s seen -> pure Bad
         | otherwise -> do
           modify $ first $ S.insert s
           b <- visit =<< lift (memoDerive s)
           unless (isBad b) $ modify $ second $ M.insert s b
           pure b

mul :: Ast -> Ast -> Ast
mul Bad _ = Bad
mul _ Bad = Bad
mul x y   = x :@ y

-- | Helps cut a non-empty parse forest down to one tree.
chainsaw :: [Ast] -> Ast
chainsaw xs | null xs'   = Bad
            | otherwise  = head xs'
            where xs' = filter (not . isBad) xs

Memoized derivatives are straightforward. For computing derivatives, we translate the rules given in the paper, and for memoization, on discovering a missing entry, we insert a knot-tying value before recursing, and replace it with the result of the recursion afterward.

memoDerive :: String -> State Grammar Pe
memoDerive cs@(c:s) = do
  m <- get
  unless (M.member cs m) $ do
    modify $ M.insert cs $ NT cs
    d <- derive c =<< memoDerive s
    modify $ M.insert cs d
  gets (M.! cs)
memoDerive _ = error "unreachable"

derive :: Char -> Pe -> State Grammar Pe
derive c pe = case pe of
  NT s             -> pure $ NT $ c:s
  Ch x | x == c    -> pure $ Eps x
  Or xs            -> Or <$> mapM (derive c) xs
  Del x :. y       -> (Del x :.) <$> derive c y
  x :. y           -> do
    b <- parseNull x
    dx <- derive c x
    if isBad b then pure $ dx :. y else do
      dy <- derive c y
      pure $ Or [dx :. y, Del x :. dy]
  _                -> pure Nul

Here’s the grammar that Cox claims will grind our parser to a halt:

cox :: Grammar
cox = M.fromList
  [ ("S", NT "T")
  , ("T", Or [NT "T" :. (Ch '+' :. NT "T"), NT "N"])
  , ("N", Ch '1')
  ]

Let’s try it on a small input in an interactive interpreter:

parse cox "S" "1+1+1"

The parser picks a particular parse tree:

(Lf '1' :@ (Lf '+' :@ Lf '1')) :@ (Lf '+' :@ Lf '1')

How about all strings of length 7 consisting of 1 or +?

filter (not . isBad . parse cox "S") $ replicateM 7 "+1"

Thankfully, we get:

["1+1+1+1"]

At last, it’s time to demolish Cox’s claims. We parse an 80-character input with a typo near the end:

main :: IO ()
main = print $ parse cox "S" $ concat (replicate 39 "1+") ++ "+1"

Our prototype is awful. We really should:

  • Add a slimmed down version of parseNull that returns a boolean instead of an AST, and call this in derive. We only want to recover the AST once the whole string has been parsed; the rest of the time, we only care whether an expression is nullable.

  • Use a better algorithm for finding the least fixed point. We’ve perhaps chosen the clunkiest and most obvious method.

  • Remove a layer of indirection when tying the knot. That is, point to a node directly rather than a string (which requires another lookup to get at the node).

  • Apply algebraic identities to reduce the number of nodes in parsing expressions and abstract syntax trees.

And yet, on my laptop:

Bad

real    0m0.220s
user    0m0.215s
sys     0m0.005s

Clearly, parsing with derivatives is efficient when run on the allegedly exponential-running-time example given by Cox.

The moral of the story

It’s best to test drive an algorithm before condemning it. If we see hilariously bad running times, then we can include them to hammer our points home. If we see surprisingly good running times, then there’s a mistake in our reasoning and we should keep quiet until we successfully attack the algorithm from another angle. (Cox rightly notes parsing with derivatives forgoes two key properties of yacc: linear running time and ambiguity detection. If only he had focused on these trade-offs.)

Is this practicable for parsing with derivatives? Well, we have presented an entire program, yet we have written less code than appears in Cox’s excellent article on regular expressions, which quotes just a few choice cuts from a presumably complete program. Indeed, with a splash of HTML, we can easily build an interactive online demo of parsing with derivatives.

The existence of the flawed post indicates no such sanity check was done, whether because of a poor understanding of lazy evaluation or because implementing a lazy algorithm was deemed too troublesome. Both problems are solved by learning a lazy language.

In sum, insufficient experience with lazy evaluation leads to faulty time complexity analysis. Therefore we should all be comfortable with lazy languages so computer science can progress unimpeded.


Sometimes, We Need to Make a Better Tool


I learned about a couple of cool CLI tools from Nikita Sobolev's Using Better CLIs. hub and tig look like they may be worth a deeper look. This article also reminded me of one of the examples in the blog entry I rmed the other day. It reflects a certain attitude about languages and development.

One of the common complaints about OOP is that what would be a single function in other programming styles usually ends up distributed across multiple classes in an OO program. For example, instead of:

    void draw(Shape s) {
       case s of
          Circle : [code for circle]
          Square : [code for square]
          ...
    }
the code for the individual shapes ends up in the classes for Circle, Square, and so on. If you have to change the drawing code for all of the shapes, you have to track down all of the classes and modify them individually.
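
For contrast, here is a minimal Python sketch (my own illustration, not from the original complaint) of what that looks like once the drawing code moves into the classes:

    class Shape:
        def draw(self):
            raise NotImplementedError

    class Circle(Shape):
        def draw(self):
            print("drawing a circle")   # the circle code lives here

    class Square(Shape):
        def draw(self):
            print("drawing a square")   # the square code lives here

    # Changing how every shape is drawn now means visiting each subclass.
    for shape in [Circle(), Square()]:
        shape.draw()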

This is true, and it is a serious issue. We can debate the relative benefits and costs of the different designs, of course, but we might also think about ways that our development tools can help us.

As a grad student in the early 1990s, I worked in a research group that used VisualWorks Smalltalk to build all of its software. Even within a single Smalltalk image, we faced this code-all-over-the-place problem. We were adding methods to classes and modifying methods all the time as part of our research. We spent a fair amount of time navigating from class to class to work on the individual methods.

Eventually, one of my fellow students had an epiphany: we could write a new code browser. We would open this browser on a particular class, and the browser would provide a pane listing that class and all of its subclasses, and all of their subclasses. When we selected a method in the root class, the browser enabled us to click on any of the subclasses, see the code for the subclass's corresponding method, and edit it there. If the class didn't have an overriding method, we could add one in the empty pane, with the method signature supplied by the browser.

This browser didn't solve all of the problems we had learning to manage a large code base spread out over many classes, but it was a huge win for dealing with the specific issue of an algorithm being distributed across several kinds of object. It also taught me two things:

  • to appreciate the level of control that Smalltalk gave developers to inspect code and shape the development experience
  • to appreciate the mindset that creating new tools is the way to mitigate many problems in software development, if not to solve them completely

The tool-making mindset is one that I came to appreciate and understand more and more as the years passed. I'm disappointed whenever I don't put it to good use, but oftentimes I wise up and make the tools I need to help me do my work.


Traveling On The Cheap


People always seem really surprised when I tell them how cheaply I'm flying to Europe from the USA. I guess I should write down a few tricks I recently learned so I don't have to keep explaining them over and over. I could write this in the format of a clickbait listicle but I'm not going to do that because I'm not a greasy buzzfeed writer.

Fly Norwegian (.no) via Oslo and Stockholm

There are flights from Oakland, CA (OAK) to Oslo and Stockholm on Norwegian Air, nonstop (10 hours), for about $250. That's on the English version of the site. If you look at the Norwegian version, the same flights are about $100 cheaper when priced in Norwegian kroner, if you compare them side-by-side. Which I did just to be sure. From OSL or ARN you can fly to anywhere in Europe for like $30. I did this recently to Warsaw one way, for about $180. There are even flights from NYC to OSL for 1000 NOK ($118 USD) coming up later this year.

Fly via Reykjavik on Wow Air

Flights on Wow Air can be absurdly cheap if you look for their deals. They get snatched up fast though so you can’t just pick a random date when you feel like going a few weeks out and expect sweet deals. This is true in general and especially true with Wow. They announced a deal for flights via KEF (Reykjavik, Iceland) to various European major cities from SF and other US cities for $55. Though if you want more than one bag, even carry-on, they tack on another $50. Fine, so I’m flying SFO-AMS for $113.


You can find other awesome deals if you wanna browse around on Google Flights, Skyscanner, and HackTheFlight. Rome2Rio is a handy site for finding buses, trains and flights within Europe. I'm sure there are plenty of other tools people use to find super cheap flights, so comment if you know of any!

Thanks to Karl the Nord for basically all of this info.

