Sam's news

Here are some of the news sources I follow.

My main website is at

Enterprise plans, now official!

Published 23 Mar 2017 by Pierrick Le Gall in The Blog.

Having stood in the shadow of the standard plan for several years, and yet already been adopted by more than 50 organizations, the Enterprise plans are due an official introduction. They were designed for organizations, private or public, looking for a simple, affordable and yet complete tool to manage their collections of photos.

The main idea behind Enterprise is to democratize photo library management for organizations of all kinds and sizes. We are not targeting Fortune 500 companies, although some of them are already clients, but Fortune 5,000,000 companies! Enterprise plans can replace, at a reasonable cost, inadequate solutions relying on intranet shared folders, where photos get duplicated or deleted by mistake for want of an appropriate permission system.

Introduction to Enterprise plans

Why announce these plans officially today? Because the current trend clearly shows that our Enterprise plans have found their market. Although only semi-official, Enterprise plans represented nearly 40% of our revenue in February 2017! It is time to put these plans under the spotlight.

In practice, here is what changes with the Enterprise plans:

  1. they can be used by organizations, as opposed to the standard plan
  2. additional features, such as support for non-photo files (PDF, videos …)
  3. higher level of service (priority support, customization, presentation session)

Discover Enterprise

Mediawiki 1.29: How to automatically unescape characters in API output?

Published 22 Mar 2017 by user1258361 in Newest questions tagged mediawiki - Stack Overflow.

MW version: 1.29.0-wmf.16 (c8b12bf)

Example API call: api.php?action=query&prop=revisions&titles=My_Page&rvprop=content&format=json

The output contains escape characters such as \n. This didn't happen in previous versions. Where's the option to automatically unescape these?
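Those `\n` sequences are part of JSON string encoding rather than something the API adds, so there is no unescape option to look for: any JSON parser unescapes them when it decodes the response. A minimal sketch (the payload here is an abbreviated stand-in, not the full API response shape):

```python
import json

# A fragment shaped like rvprop=content output; the "\n" in the raw text
# is JSON string escaping, not a literal backslash-n in the page content.
raw = '{"query": {"pages": {"1": {"revisions": [{"*": "line one\\nline two"}]}}}}'

data = json.loads(raw)
content = data["query"]["pages"]["1"]["revisions"][0]["*"]
# After parsing, `content` contains a real newline character.
```

In other words, read the response as JSON rather than as raw text and the escaping disappears.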


Published 22 Mar 2017 by fabpot in Tags from Twig.


MediaWiki logout via api

Published 20 Mar 2017 by flappix in Newest questions tagged mediawiki - Stack Overflow.

After I request http://localhost/mediawiki-1.28.0/api.php?action=logout from my browser, I'm logged out.

If I perform the same request (with exactly the same headers) via curl, I just receive a lot of strange chars, but I'm still logged in and my wiki session has changed.

The request looks like this:

GET /mediawiki-1.28.0/api.php?action=logout HTTP/1.1
Host: localhost
Connection: keep-alive
Pragma: no-cache
Cache-Control: no-cache
Upgrade-Insecure-Requests: 1
User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/56.0.2924.87 Safari/537.36
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
Accept-Encoding: gzip, deflate, sdch, br
Accept-Language: en-US,en;q=0.8
Cookie: my_wiki_session=g1l7n0qrhdssegbpjmhu7m3solsbgbkm; my_wikiUserID=1; my_wikiUserName=Admin; my_wikiToken=c61ad94d0d0e4e3008af84f9adf727cb

Any ideas?
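One hedged guess about the "strange chars": the copied browser headers include `Accept-Encoding: gzip, deflate, sdch, br`, so the server sends a compressed body, which the browser decodes transparently but curl prints raw. Passing `--compressed` to curl (or dropping the Accept-Encoding header) would confirm it. A sketch of what is happening, with a hypothetical stand-in payload:

```python
import gzip

# Hypothetical stand-in for the API's logout response body.
plain = b'<api><logout/></api>'

# What the server sends when the request advertises Accept-Encoding: gzip --
# printed raw by curl, it looks like a run of "strange chars".
wire_bytes = gzip.compress(plain)

# What the browser (or `curl --compressed`) does before showing the response.
decoded = gzip.decompress(wire_bytes)
```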

Please Help Us Track Down Apple II Collections

Published 20 Mar 2017 by Jason Scott in ASCII by Jason Scott.

Please spread this as far as possible – I want to reach folks who are far outside the usual channels.

The Summary: Conditions are very, very good right now for easy, top-quality, final ingestion of original commercial Apple II Software and if you know people sitting on a pile of it or even if you have a small handful of boxes, please get in touch with me to arrange the disks to be imaged. 

The rest of this entry says this in much longer, hopefully compelling fashion.

We are in a golden age for Apple II history capture.

For now, and it won’t last (because nothing lasts), an incredible amount of interest and effort and tools are all focused on acquiring Apple II software, especially educational and engineering software, and ensuring it lasts another generation and beyond.

I’d like to take advantage of that, and I’d like your help.

Here’s the secret about Apple II software: Copy Protection Works.

Copy protection, that method of messing up easy copying from floppy disks, turns out to have been very effective at doing what it is meant to do – slow down the duplication of materials so a few sales can eke by. For anything but the most compelling, most universally interesting software, copy protection did a very good job of ensuring that only the approved disks that went out the door are the remaining extant copies for a vast majority of titles.

As programmers and publishers laid logic bombs and coding traps and took the brilliance of watchmakers and used it to design alternative operating systems, they did so to ensure people wouldn’t take the time to actually make the effort to capture every single bit off the drive and do the intense and exacting work to make it easy to spread in a reproducible fashion.

They were right.

So, obviously it wasn’t 100% effective at stopping people from making copies of programs, or so many people who used the Apple II wouldn’t remember the games they played at school or at user-groups or downloaded from AE Lines and BBSes, with pirate group greetings and modified graphics.

What happened is that pirates and crackers did what was needed to break enough of the protection on high-demand programs (games, productivity) to make them work. They used special hardware modifications to “snapshot” memory and pull out a program. They traced the booting of the program by stepping through its code and then snipped out the clever tripwires that freaked out if something wasn’t right. They tied it up into a bow so that instead of a horrendous 140 kilobyte floppy, you could have a small 15 or 20 kilobyte program instead. They even put multiple cracked programs together on one disk so you could get a bunch of cool programs at once.

I have an entire section of TEXTFILES.COM dedicated to this art and craft.

And one could definitely argue that the programs (at least the popular ones) were “saved”. They persisted, they spread, they still exist in various forms.

And oh, the crack screens!

I love the crack screens, and put up a massive pile of them here. Let’s be clear about that – they’re a wonderful, special thing and the amount of love and effort that went into them (especially on the Commodore 64 platform) drove an art form (demoscene) that I really love and which still thrives to this day.

But these aren’t the original programs and disks, and in some cases, not the originals by a long shot. What people remember booting in the 1980s were often distant cousins to the floppies that were distributed inside the boxes, with the custom labels and the nice manuals.


On the left is the title screen for Sabotage. It’s a little clunky and weird, but it’s also something almost nobody who played Sabotage back in the day ever saw; they only saw the instructions screen on the right. The reason for this is that there were two files on the disk: one showed the title screen and then launched the game, and the other was the game itself. Whoever cracked it long ago only took the game file, leaving the rest as one might leave the shell of a nut.

I don’t think it’s terrible these exist! They’re art and history in their own right.

However… the mistake, which I completely understand making, is to see programs and versions of old Apple II software up on the Archive and say “It’s handled, we’re done here.” You might be someone with a small stack of Apple II software, newly acquired or decades old, and think you don’t have anything to contribute.

That’d be a huge error.

It’s a bad assumption because there’s a chance the original versions of these programs, unseen since they were sold, is sitting in your hands. It’s a version different than the one everyone thinks is “the” version. It’s precious, it’s rare, and it’s facing the darkness.

There is incredibly good news, however.

I’ve mentioned some of these folks before, but there is now a powerful allegiance of very talented developers and enthusiasts who have been pouring an enormous amount of skills into the preservation of Apple II software. You can debate if this is the best use of their (considerable) skills, but here we are.

They have been acquiring original commercial Apple II software from a variety of sources, including auctions, private collectors, and luck. They’ve been duplicating the originals on a bits level, then going in and “silent cracking” the software so that it can be played on an emulator or via the web emulation system I’ve been so hot on, and not have any change in operation, except for not failing due to copy protection.

With a “silent crack”, you don’t take the credit, you don’t make it about yourself – you just make it work, and work entirely like it did, without yanking out pieces of the code and program to make it smaller for transfer or to get rid of a section you don’t understand.

Most prominent of these is 4AM, who I have written about before. But there are others, and they’re all working together at the moment.

These folks, these modern engineering-minded crackers, are really good. Really, really good.

They’ve been developing tools from the ground up that are focused on silent cracks, of optimizing the process, of allowing dozens, sometimes hundreds of floppies to be evaluated automatically and reducing the workload. And they’re fast about it, especially when dealing with a particularly tough problem.

Take, for example, the efforts required to crack Pinball Construction Set, and marvel not just that it was done, but that a generous and open-minded article was written explaining exactly what was being done to achieve this.

This group can be handed a stack of floppies, image them, evaluate them, and find which have not yet been preserved in this fashion.

But there’s only one problem: They are starting to run out of floppies.

I should be clear that there’s plenty left in the current stack – hundreds of floppies are being processed. But I also have seen the effort chug along and we’ve been going through direct piles, then piles of friends, and then piles of friends of friends. We’ve had a few folks from outside the community bring stuff in, but those are way more scarce than they should be.

I’m working with a theory, you see.

My theory is that there are large collections of Apple II software out there. Maybe someone’s dad had a store long ago. Maybe someone took in boxes of programs over the years and they’re in the basement or attic. I think these folks are living outside the realm of the “Apple II Community” that currently exists (and which is a wonderful set of people, be clear). I’m talking about the difference between a fan club for surfboards and someone who has a massive set of surfboards because his dad used to run a shop and they’re all out in the barn.

A lot of what I do is put groups of people together and then step back to let the magic happen. This is a case where this amazingly talented group of people are currently a well-oiled machine – they help each other out, they are innovating along this line, and Apple II software is being captured in a world-class fashion, with no filtering being done because it’s some hot ware that everyone wants to play.

For example, piles and piles of educational software have returned from potential oblivion, because it’s about the preservation, not the title. Wonderfully done works are being brought back to life and are playable on the Internet Archive.

So like I said above, the message is this:

Conditions are very, very good right now for easy, top-quality, final ingestion of original commercial Apple II Software and if you know people sitting on a pile of it or even if you have a small handful of boxes, please get in touch with me to arrange the disks to be imaged.

I’ll go on podcasts or do interviews, or chat with folks on the phone, or trade lots of e-mails discussing details. This is a very special time, and I feel the moment to act is now. Alliances and communities like these do not last forever, and we’re in a peak moment of talent and technical landscape to really make a dent in what are likely acres of unpreserved titles.

It’s 4am and nearly morning for Apple II software.

It’d be nice to get it all before we wake up.


What Silvio Berlusconi tells us about "post-truth" politics

Published 20 Mar 2017 by in New Humanist Articles and Posts.

Donald Trump’s populism has a striking precedent. So what can we learn from the Berlusconi era?

Nature in China

Published 20 Mar 2017 by Tom Wilson in tom m wilson.

The sun sets in south-east Yunnan province, over karst mountains and lakes, not far from the border with Vietnam. Last weekend I went to Puzheihei, an area of karst mountains surrounded by water-lily-filled lakes 270 km south-east of Kunming. What used to be a five hour bus journey now just takes 1.5 hours on the […]

Managing images on an open wiki platform

Published 19 Mar 2017 by Oliver K in Newest questions tagged mediawiki - Webmasters Stack Exchange.

I'm developing a wiki using MediaWiki, and there are a few ways of including images in wiki pages, such as uploading them to the website itself, hotlinking them from external websites (which can potentially ban you), or requesting that others place an image.

Surely images may be difficult to manage as one day someone may upload a vulgar image and many people will then see it. How can I ensure vulgar images do not get through and that administrators aren't scarred for life after monitoring them?

Returning (again) to WordPress

Published 19 Mar 2017 by Sam Wilson in Sam's notebook.

Every few years I try to move my blog away from WordPress. I tried again earlier this year, but here I am back in WordPress before even a month has gone by! Basically, nothing is as conducive to writing for the web.

I love MediaWiki (which is what I shifted to this time; last time around it was Dokuwiki, and for a brief period last year it was a wrapper for Pandoc that I’m calling markdownsite; there have been other systems too) but wikis really are general-purpose co-writing platforms, best for multiple users working on text that needs to be revised forever. Not for random mutterings that no one will ever read, let alone particularly need to edit on an ongoing basis.

So WordPress it is, and it’s leading me to consider the various ‘streams’ of words that I use daily: email, photography, journal, calendar, and blog (I’ll not get into the horrendous topic of chat platforms). In the context of those streams, WordPress excels. So I’ll try it again, I think.

MediaWiki Upload Fails

Published 18 Mar 2017 by VikingGoat in Newest questions tagged mediawiki - Stack Overflow.

I'm running a locally hosted MediaWiki, but every time I try to import the XML backup/download of my web-live MediaWiki via Special:Import I keep getting the error Import failed: Loss of session data. You might have been logged out. Please verify that you're still logged in and try again. If it still does not work, try logging out and logging back in, and check that your browser allows cookies from this site.

The suggested steps there do nothing. Any help would be fantastic.

Does the composer software have a command like python -m compileall ./

Published 18 Mar 2017 by jehovahsays in Newest questions tagged mediawiki - Server Fault.

I want to use composer in a MediaWiki root folder with multiple directories that each need composer to install their dependencies, using a single command like composer -m installall ./. For example, if the root folder were all written in Python, I could use the command python -m compileall ./.
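Composer has no built-in recursive equivalent of `compileall`. A common workaround (a sketch, not an official composer feature) is to walk the tree and run `composer install` in each directory that has its own composer.json:

```python
import os
import subprocess


def find_composer_dirs(root):
    """Yield every directory under root with its own composer.json,
    skipping vendor/ trees that composer itself creates."""
    for dirpath, dirnames, filenames in os.walk(root):
        if 'vendor' in dirnames:
            dirnames.remove('vendor')  # don't descend into installed deps
        if 'composer.json' in filenames:
            yield dirpath


def composer_install_all(root):
    """Run `composer install` in each discovered directory."""
    for d in find_composer_dirs(root):
        subprocess.run(['composer', 'install'], cwd=d, check=True)
```

Calling `composer_install_all('.')` from the MediaWiki root then covers core plus each extension that ships its own composer.json.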

Place a table in an Ordered List in Mediawiki

Published 18 Mar 2017 by Brian Grinter in Newest questions tagged mediawiki - Stack Overflow.

I'm trying to place a table inline in an ordered list in MediaWiki; however, it breaks my numbering. What I want is

1. A1
1.1 B1
1.2 B2
my table
2. A2

but what I get is

1. A1
1.1 B1
1.2 B2
my table
1. A2

Basic markup I'm using is:

# A1
## B1
## B2
my table

Anyone had any experience with this?

Thanks in advance
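One workaround I'm aware of (a sketch based on MediaWiki's list-continuation syntax; untested against this exact wiki) is to prefix every line of the table with `#:`, so the parser treats the table as a continuation of the current list item instead of restarting the list:

```
# A1
## B1
## B2
#:{| class="wikitable"
#:| my table
#:|}
# A2
```

With the table attached to the first item this way, the following `# A2` should continue as 2 rather than resetting to 1.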


install mediawiki on nginx server ubuntu 16 + php 7.0

Published 18 Mar 2017 by Stefan0309 in Newest questions tagged mediawiki - Stack Overflow.

I can't see home page from mediawiki folder.

I successfully set up the server (I added a pointer for the host in /etc/hosts) and I can clearly see the "Welcome to nginx" page, but now, after adding all the content of the downloaded "mediawiki 1.27" folder to /var/www/mediawiki/html, I still get the nginx page and not the MediaWiki page.

Here is my server setup, nginx.conf:

user www-data;
worker_processes auto;
pid /run/;

events {
    worker_connections 768;
    # multi_accept on;
}

http {

    # Basic Settings

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    # server_tokens off;

    server_names_hash_bucket_size 64;
    # server_name_in_redirect off;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # SSL Settings

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
    ssl_prefer_server_ciphers on;

    # Logging Settings

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    # Gzip Settings

    gzip on;
    gzip_disable "msie6";

    # gzip_vary on;
    # gzip_proxied any;
    # gzip_comp_level 6;
    # gzip_buffers 16 8k;
    # gzip_http_version 1.1;
    # gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

    # Virtual Host Configs

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}

#mail {
#   # See sample authentication script at:
#   #
#   # auth_http localhost/auth.php;
#   # pop3_capabilities "TOP" "USER";
#   # imap_capabilities "IMAP4rev1" "UIDPLUS";
#   server {
#       listen     localhost:110;
#       protocol   pop3;
#       proxy      on;
#   }
#   server {
#       listen     localhost:143;
#       protocol   imap;
#       proxy      on;
#   }
#}

Here is my mediawiki config from /etc/nginx/sites-available (I've also added a symlink mapping it into sites-enabled):

server {
    listen 80;
    listen [::]:80;

    root /var/www/mediawiki/html;
    index index.html index.htm index.nginx-debian.html;

    location / {
        try_files $uri $uri/ @rewrite;
    }

    location @rewrite {
        rewrite ^/(.*)$ /index.php?title=$1&$args;
    }

    location ^~ /maintenance/ {
        return 403;
    }

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_pass unix:/tmp/phpfpm.sock;
    }

    location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
        try_files $uri /index.php;
        expires max;
        log_not_found off;
    }

    location = /_.gif {
        expires max;
    }

    location ^~ /cache/ {
        deny all;
    }

    location /dumps {
        root /var/www/mediawiki/local;
        autoindex on;
    }
}

I found a tutorial on the official nginx site, but I couldn't find LocalSettings.php to change $wgUsePathInfo to TRUE. Maybe this is the reason why I can't load the index page from MediaWiki?

P.S. sudo nginx -t is OK.

Hilton Harvest Earth Hour Picnic and Concert

Published 18 Mar 2017 by Dave Robertson in Dave Robertson.


Sandpapering Screenshots

Published 15 Mar 2017 by Jason Scott in ASCII by Jason Scott.

The collection I talked about yesterday was subjected to the Screen Shotgun, which does a really good job of playing the items, capturing screenshots, and uploading them into the item to allow people to easily see, visually, what they’re in for if they boot them up.

In general, the screen shotgun does the job well, but not perfectly. It doesn’t understand what it’s looking at, at all, and the method I use to decide the “canonical” screenshot is inherently shallow – I choose the largest filesize, because that tends to be the most “interesting”.
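The largest-filesize heuristic boils down to a few lines; a sketch (the function name and the idea that screenshots are compared by on-disk size are from the description above, everything else is illustrative):

```python
import os


def canonical_screenshot(paths):
    """Pick the 'canonical' screenshot by largest file size: busy, detailed
    frames compress worst, so the biggest file tends to look most 'interesting'.
    As noted, that is exactly how mid-load artifact screens win over titles."""
    return max(paths, key=os.path.getsize)
```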

The bug in this is that if you have, say, these three screenshots:

…it’s going to choose the first one, because those middle-of-loading graphics for an animated title screen have tons of little artifacts, and the filesize is bigger. Additionally, the second is fine, but it’s not the “title”, the recognized “welcome to this program” image. So the best choice turns out to be the third.

I don’t know why I’d not done this sooner, but while waiting for 500 disks to screenshot, I finally wrote a program to show me all the screenshots taken for an item, and declare a replacement canonical title screenshot. The results have been way too much fun.

It turns out, doing this for Apple II programs in particular, where it’s removed the duplicates and is just showing you a gallery, is beautiful:

Again, the all-text “loading screen” in the middle, which is caused by blowing program data into screen memory, wins the “largest file” contest, but literally any other of the screens would be more appropriate.

This is happening all over the place: crack screens win over the actual main screen, the mid-loading noise of Apple II programs win over the final clean image, and so on.

Working with tens of thousands of software programs, primarily alone, means that I’m trying to find automation wherever I can. I can’t personally boot up each program and do the work needed to screenshot/describe it – if a machine can do anything, I’ll make the machine do it. People will come to me with fixes or changes if the results are particularly ugly, but it does leave a small amount that no amount of automation is likely to catch.

If you watch a show or documentary on factory setups and assembly lines, you’ll notice they can’t quite get rid of people along the entire line, especially the sign-off. Someone has to keep an eye to make sure it’s not going all wrong, or, even more interestingly, a table will come off the line and you see one person giving it a quick run-over with sandpaper, just to pare down the imperfections or missed spots of the machine. You still did an enormous amount of work with no human effort, but if you think that’s ready for the world with no final sign-off, you’re kidding yourself.

So while it does mean another hour or two looking at a few hundred screenshots, it’s nice to know I haven’t completely automated away the pleasure of seeing some vintage computer art, for my work, and for the joy of it.

More Ways to Work with Load Balancers

Published 15 Mar 2017 by DigitalOcean in DigitalOcean Blog.

When building new products at DigitalOcean, one of our goals is to ensure that they're simple to use and developer friendly. And that goes beyond the control panel; we aim to provide intuitive APIs and tools for each of our products. Since the release of Load Balancers last month, we've worked to incorporate them into our API client libraries and command line client. We've also seen community-supported open source projects extended to support Load Balancers.

Today, we want to share several new ways you can interact with Load Balancers.

Command Line: doctl

doctl is our easy-to-use, official command line client. Load Balancer support landed in version v1.6.0. You can download the release from GitHub or install it using Homebrew on Mac:

brew install doctl

You can use doctl for anything you can do in our control panel. For example, here's how you would create a Load Balancer:

doctl compute load-balancer create --name "example-01" \
    --region "nyc3" --tag-name "web:prod" \
    --algorithm "round_robin" \
    --forwarding-rules \

Find doctl's full documentation in this DigitalOcean tutorial.

Go: godo

We're big fans of Go, and godo is the way to interact with DigitalOcean using Go. Load Balancer support is included in the recently tagged v1.0.0 release. Here's an example:

createRequest := &godo.LoadBalancerRequest{
    Name:      "example-01",
    Algorithm: "round_robin",
    Region:    "nyc3",
    ForwardingRules: []godo.ForwardingRule{
        {
            EntryProtocol:  "http",
            EntryPort:      80,
            TargetProtocol: "http",
            TargetPort:     80,
        },
    },
    HealthCheck: &godo.HealthCheck{
        Protocol:               "http",
        Port:                   80,
        Path:                   "/",
        CheckIntervalSeconds:   10,
        ResponseTimeoutSeconds: 5,
        HealthyThreshold:       5,
        UnhealthyThreshold:     3,
    },
    StickySessions: &godo.StickySessions{
        Type: "none",
    },
    Tag:                 "web:prod",
    RedirectHttpToHttps: false,
}

lb, _, err := client.LoadBalancers.Create(ctx, createRequest)

The library's full documentation is available on GoDoc.

Ruby: droplet_kit

droplet_kit is our Ruby API client library. Version 2.1.0 has Load Balancer support and is now available on Rubygems. You can install it with this command:

gem install droplet_kit

And you can create a new Load Balancer like so:

load_balancer = DropletKit::LoadBalancer.new(
  name: 'example-lb-001',
  algorithm: 'round_robin',
  tag: 'web:prod',
  redirect_http_to_https: true,
  region: 'nyc3',
  forwarding_rules: [
    DropletKit::ForwardingRule.new(
      entry_protocol: 'http',
      entry_port: 80,
      target_protocol: 'http',
      target_port: 80,
      certificate_id: '',
      tls_passthrough: false
    )
  ],
  sticky_sessions: DropletKit::StickySession.new(
    type: 'none',
    cookie_name: '',
    cookie_ttl_seconds: nil
  ),
  health_check: DropletKit::HealthCheck.new(
    protocol: 'http',
    port: 80,
    path: '/',
    check_interval_seconds: 10,
    response_timeout_seconds: 5,
    healthy_threshold: 5,
    unhealthy_threshold: 3
  )
)


Community Supported

Besides our official open source projects, there are two community contributions we'd like to highlight:

Thanks to our colleagues Viola and Andrew for working on these features, and the open source community for including Load Balancer support in their projects. In particular, we want to give a special shout out to Paul Stack and the rest of our friends at HashiCorp who added support to Terraform so quickly. You rock!

We're excited to see more tools add Load Balancer support. If you're the maintainer of a project that has added support, Tweet us @digitalocean. We can help spread the word!

Rafael Rosa
Product Manager, High Availability

Thoughts on a Collection: Apple II Floppies in the Realm of the Now

Published 15 Mar 2017 by Jason Scott in ASCII by Jason Scott.

I was connected with The 3D0G Knight, a long-retired Apple II pirate/collector who had built up a set of hundreds of floppy disks acquired from many different locations and friends decades ago. He generously sent me his entire collection to ingest into a more modern digital format, as well as the Internet Archive’s software archive.

The floppies came in a box without any sort of sleeves for them, with what turned out to be roughly 350 of them removed from “ammo boxes” by 3D0G from his parents’ house. The disks all had labels of some sort, and a printed index came along with it all, mapped to the unique disk ID/Numbers that had been carefully put on all of them years ago. I expect this was months of work at the time.

Each floppy is 140k of data on each side, and in this case, all the floppies had been single-sided and clipped with an additional notch with a hole punch to allow the second side to be used as well.

Even though they’re packed a little strangely, there was no damage anywhere, nothing bent or broken or ripped, and all the items were intact. It looked to be quite the bonanza of potentially new vintage software.

So, this activity is at the crux of the work going on with both the older software on the Internet Archive, as well as what I’m doing with web browser emulation and increasingly easy access to the works of old. The most important thing, over everything else, is to close the air gap – get the data off these disappearing floppy disks and into something online where people or scripts can benefit from them and research them. Almost everything else – scanning of cover art, ingestion of metadata, pulling together the history of a company or cross-checking what titles had which collaborators… that has nowhere near the expiration date of the magnetized coated plastic disks going under. This needs us and it needs us now.

The way that things currently work with Apple II floppies is to separate them into two classes: Disks that Just Copy, and Disks That Need A Little Love. The Little Love disks, when found, are packed up and sent off to one of my collaborators, 4AM, who has the tools and the skills to get data off particularly tenacious floppies, as well as doing “silent cracks” of commercial floppies to preserve what’s on them as best as possible.

Doing the “Disks that Just Copy” is a mite easier. I currently have an Apple II system on my desk that connects via a USB-to-serial connection to my PC. There, I run a program called Apple Disk Transfer that basically turns the Apple into a Floppy Reading Machine, with a pretty interface and everything.

Apple Disk Transfer (ADT) has been around a very long time and knows what it’s doing – a floppy disk with no trickery on the encoding side can be ripped out and transferred to a “.DSK” file on the PC in about 20 seconds. If there’s something wrong with the disk in terms of being an easy read, ADT is very loud about it. I can do other things while reading floppies, and I end up with a whole pile of filenames when it’s done. The workflow, in other words, isn’t so bad as long as the floppies aren’t in really bad shape. In this particular set, the floppies were in excellent shape, except when they weren’t, and the vast majority fell into the “excellent” camp.

The floppy drive that sits at the middle of this looks like some sort of nightmare, but it helps to understand that with Apple II floppy drives, you really have to have the cover removed at all times, because you will be constantly checking the read head for dust, smudges, and so on. Unscrewing the whole mess and putting it back together for looks just doesn’t scale. It’s ugly, but it works.

It took me about three days (while doing lots of other stuff) but in the end I had 714 .dsk images pulled from both sides of the floppies, which works out to 357 floppy disks successfully imaged. Another 20 or so are going to get a once over but probably are going to go into 4am’s hands to get final evaluation. (Some of them may in fact be blank, but were labelled in preparation, and so on.) 714 is a lot to get from one person!

As mentioned, an Apple II 5.25″ floppy disk image is pretty much always 140k. The names of the floppy are mine, taken off the label, or added based on glancing inside the disk image after it’s done. For a quick glance, I use either an Apple II emulator called Applewin, or the fantastically useful Apple II disk image investigator Ciderpress, which is frankly the gold standard for what should be out there for every vintage disk/cartridge/cassette image. As might be expected, labels don’t always match contents. C’est la vie.

As for the contents of the disks themselves; this comes down to what the “standard collection” was for an Apple II user in the 1980s who wasn’t afraid to let their software library grow utilizing less than legitimate circumstances. Instead of an elegant case of shiny, professionally labelled floppy diskettes, we get a scribbled, messy, organic collection of all range of “warez” with no real theme. There’s games, of course, but there’s also productivity, utilities, artwork, and one-off collections of textfiles and documentation. Games that were “cracked” down into single-file payloads find themselves with 4-5 other unexpected housemates and sitting behind a menu. A person spending the equivalent of $50-$70 per title might be expected to have a relatively small and distinct library, but someone who is meeting up with friends or associates and duplicating floppies over a few hours will just grab bushels of strange.

The result of the first run is already up on the Archive: A 37 Megabyte .ZIP file containing all the images I pulled off the floppies. 

In terms of what will be of relevance to later historians, researchers, or collectors, that zip file is probably the best way to go – it’s not munged up with the needs of the Archive’s structure, and is just the disk images and nothing else.

This single .zip archive might be sufficient for a lot of sites (go git ‘er!) but as mentioned infinite times before, there is a very strong ethic across the Internet Archive’s software collection to make things as accessible as possible, and hence there are nearly 500 items in the “3D0G Knight Collection” besides the “download it all” item.

The rest of this entry talks about why it’s 500 and not 714, and how it is put together, and the rest of my thoughts on this whole endeavor. If you just want to play some games online or pull a 37mb file and run, cackling happily, into the night, so be it.

The relatively small number of people who have exceedingly hard opinions on how things “should be done” in the vintage computing space will also want to join the folks who are pulling the 37mb file. Everything else done by me after the generation of the .zip file is in service of the present and near future. The hundreds of items on the Archive that each contain one floppy disk image, and a way to interact with it, are meant for people to find now. I want someone to have a vague memory of a game or program they once interacted with and, if possible, to find it on the Archive. I also like people browsing around randomly until something catches their eye and being able to leap into the program immediately.

To those ends, and as an exercise, I’ve acquired or collaborated on scripts to do the lion’s share of analysis on software images to prep them for this living museum. These scripts get it “mostly” right, and the rough edges they bring in from running are easily smoothed over by a microscopic amount of post-processing manual attention, like running a piece of sandpaper over a machine-made joint.

Again, we started out with 714 disk images. The first thing done was to run them against a script that has hash checksums for every exposed Apple II disk image on the Archive, which now number over 10,000. Doing this dropped the “uniquely new” disk images from 714 to 667.

Next, I concatenated disk images that are part of the same product into one item: if a paint program has two floppy disk images for each of the sides of its disk, those become a single item. In one or two cases, the program spans multiple floppies, so 4-8 (and in one case, 14!) floppy images become a single item. Doing this dropped the total from 667 to 495 unique items. That’s why the number is significantly smaller than the original total.

Let’s talk for a moment about this.

Using hashes and comparing them is the roughest of rough approaches to de-duplicating software items. I do it with Apple II images because they tend to be self-contained (a single .dsk file) and because Apple II software has a lot of people involved in it. I’m not alone by any means in acquiring these materials and I’m certainly not alone in terms of work being done to track down all the unique variations and most obscure and nearly lost packages written for this platform. If I was the only person in the world (or one of a tiny sliver) working on this I might be super careful with each and every item to catalog it – but I’m absolutely not; I count at least a half-dozen operations involved in Apple II floppy image ingestion.

And as a bonus, it’s a really nice platform. When someone puts their heart into an Apple II program, it rewards them and the end user as well – the graphics can be charming, the program flow intuitive, and the whole package just gleams on the screen. It’s rewarding to work with this corpus, so I’m using it as a test bed for all these methods, including using hashes.

But hash checksums are seriously not the be-all for this work. Anything can make a hash different – an added file, a modified bit, or a compilation of already-on-the-archive-in-a-hundred-places files that just happen to be grouped up slightly different than others. That said, it’s not overwhelming – you can read about what’s on a floppy and decide what you want pretty quickly; gigabytes will not be lost and the work to track down every single unique file has potential but isn’t necessary yet.

(For the people who care, the Internet Archive generates three different hashes (md5, crc32, sha1) and lists the size of the file – looking across all of those for comparison is pretty good for ensuring you probably have something new and unique.)
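The multi-field check described above can be sketched in a few lines (toy data below; the real comparison runs against the Archive’s listed metadata for thousands of images):

```python
import hashlib
import zlib

def image_hashes(data):
    """Fingerprint a disk image the way the Archive lists files:
    md5, crc32, sha1, and size; a match on all four is near-certain."""
    return (
        hashlib.md5(data).hexdigest(),
        format(zlib.crc32(data) & 0xFFFFFFFF, "08x"),
        hashlib.sha1(data).hexdigest(),
        len(data),
    )

def unique_images(images, known):
    """images maps filename -> raw bytes of a 140k .dsk file; known is
    the set of fingerprints already on the Archive. Returns new names."""
    return [name for name, data in images.items()
            if image_hashes(data) not in known]

# Two fake 140k images; pretend disk_a is already on the Archive.
disk_a = bytes(140 * 1024)
disk_b = b"\x01" * (140 * 1024)
known = {image_hashes(disk_a)}
print(unique_images({"a.dsk": disk_a, "b.dsk": disk_b}, known))  # ['b.dsk']
```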

Once the items are up there, the Screen Shotgun whips into action. It plays the programs in the emulator, takes screenshots, leafs off the unique ones, and then assembles it all into a nice package. Again, not perfect but left alone, it does the work with no human intervention and gets things generally right. If you see a screenshot in this collection, a robot did it and I had nothing to do with it.

This leads, of course, to scaring out which programs are a tad not-bootable, and by that I mean that they boot up in the emulator and the emulator sees them and all, but the result is not that satisfying:

On a pure accuracy level, this is doing exactly what it’s supposed to – the disk wasn’t ever a properly packaged, self-contained item, and it needs a boot disk to go in the machine first before you swap the floppy. I intend to work with volunteers to help with this problem, but here is where it stands.

The solution in the meantime is a Java program modified by Kevin Savetz, which analyzes the floppy disk image and prints all the disk information it can find, including the contents of BASIC programs and textfiles. Here’s a non-booting disk where this worked out. The result is that this all gets ingested into the search engine of the Archive, and so if you’re looking for a file within the disk images, there’s a chance you’ll be able to find it.
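The indexing idea (sketched here in Python rather than Savetz’s actual Java tool) boils down to pulling runs of printable characters out of the raw image so a search engine can see them:

```python
import re

def extract_strings(image, min_len=4):
    """Pull runs of printable ASCII out of a raw disk image.
    Apple II text often has the high bit set, so strip it first."""
    stripped = bytes(b & 0x7F for b in image)
    runs = re.findall(rb"[ -~]{%d,}" % min_len, stripped)
    return [r.decode("ascii") for r in runs]

# A fake sector: garbage bytes around a high-bit-set "HELLO WORLD".
sector = b"\x00\x01" + bytes(b | 0x80 for b in b"HELLO WORLD") + b"\x02"
print(extract_strings(sector))  # ['HELLO WORLD']
```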

Once the robots have their way with all the items, I can go in and fix a few things, like screenshots that went south, or descriptions and titles that don’t reflect what actually boots up. The amount of work I, a single person, have to do is therefore reduced to something manageable.

I think this all works well enough for the contemporary vintage software researcher and end user. Perhaps that opinion is not universal.

What I can say, however, is that the core action here – of taking data away from a transient and at-risk storage medium and putting it into a slightly less transient, less at-risk storage medium – is 99% of the battle. To have the will to do it, to connect with the people who have these items around and to show them it’ll be painless for them, and to just take the time to shove floppies into a drive and read them, hundreds of times… that’s the huge mountain to climb right now. I no longer have particularly deep concerns about technology failing to work with these digital images, once they’re absorbed into the Internet. It’s this current time, out in the cold, unknown and unloved, that they’re the most at risk.

The rest, I’m going to say, is gravy.

I’ll talk more about exactly how tasty and real that gravy is in the future, but for now, please take a pleasant walk in the 3D0G Knight’s Domain.

Will progress kill humanism?

Published 15 Mar 2017 by in New Humanist Articles and Posts.

Yuval Noah Harari’s new book explores the idea that scientific knowledge might one day undermine democratic values.

The Followup

Published 14 Mar 2017 by Jason Scott in ASCII by Jason Scott.

Writing about my heart attack garnered some attention. I figured it was only right to fill in later details and describe what my current future plans are.

After the previous entry, I went back into the emergency room of the hospital I was treated at, twice.

The first time was because I “felt funny”; I just had no grip on “is this the new normal” and so just to understand that, I went back in and got some tests. They did an EKG, a blood test, and let me know all my stats were fine and I was healing according to schedule. That took a lot of stress away.

Two days later, I went in because I was having a marked shortness of breath, where I could not get enough oxygen in and it felt a little like I was drowning. Another round of tests, and one of the cardiologists mentioned a side effect of one of the drugs I was taking was this sort of shortness/drowning. He said it usually went away and the company claimed 5-7% of people got this side effect, but that they observed more like 10-15%. They said I could wait it out or swap drugs. I chose swap. After that, I’ve had no other episodes.

The hospital thought I should stay in Australia for 2 weeks before flying. Thanks to generosity from both MuseumNext and the ACMI, my hosts, that extra AirBnB time was basically paid for. MuseumNext also worked to help move my international flight ahead by the weeks needed; a very kind gesture.

Kind gestures abounded, to be clear. My friend Rochelle extended her stay from New Zealand to stay an extra week; Rachel extended hers to match my new departure date. Folks rounded up funds and sent them along, which helped cover some additional costs. Visitors stopped by the AirBnB when I wasn’t really taking any walks outside, to provide additional social contact.

Here is what the blockage looked like, before and after. As I said, roughly a quarter of my heart wasn’t getting any significant blood and somehow I pushed through it for nearly a week. The insertion of a balloon and then a metal stent opened the artery enough for the blood flow to return. Multiple times, people made it very clear that this could have finished me off handily, and mostly luck involving how my body reacted was what kept me going and got me in under the wire.

From the responses to the first entry, it appears that a lot of people didn’t know heart attacks could be a lingering, growing issue and not just a bolt of lightning that strikes in the middle of a show or while walking down the street. If nothing else, I’m glad that it’s caused a number of people to be aware of how symptoms present themselves, as well as getting people to check their cholesterol, which I didn’t see as a huge danger compared to other factors, and which turned out to be significant indeed.

As for drugs, I’ve got a once-a-day waterfall of pills for blood pressure, cholesterol, heart healing, anti-clotting, and my long-standing annoyance of gout (which I’ve not had for years thanks to the pills). I’m on some of them for the next few months, some for a year, and some forever. I’ve also been informed I’m officially at risk for another heart attack, but the first heart attack was my hint in that regard.

As I healed, and understood better what was happening to me, I got better remarkably quick. There is a single tiny dot on my wrist from the operation, another tiny dot where the IV was in my arm at other times. Rachel gifted a more complicated Fitbit to replace the one I had, with the new one tracking sleep schedule and heart rate, just to keep an eye on it.

A day after landing back in the US, I saw a cardiologist at Mt. Sinai, one of the top doctors, who gave me some initial reactions to my charts and information: I’m very likely going to be fine, maybe even better than before. I need to take care of myself, and I was. If I was smoking or drinking, I’d have to stop, but since I’ve never had alcohol and I’ve never smoked, I’m already ahead of that game. I enjoy walking, a lot. I stay active. And as of getting out of the hospital, I am vegan for at least a year. Caffeine’s gone. Raw vegetables are in.

One might hesitate to put this all online, because the Internet is spectacularly talented at generating hatred and health advice. People want to help – it comes from a good place. But I’ve got a handle on it and I’m progressing well; someone hitting me up with a nanny-finger-wagging paragraph and 45 links isn’t going to help much. But go ahead if you must.

I failed to mention it before, but when this was all going down, my crazy family of the Internet Archive jumped in, everyone from Dad Brewster through to all my brothers and sisters scrambling to find me my insurance info and what they had on their cards, as I couldn’t find mine. It was sometime really late when I first pinged everyone with “something is not good,” and everyone has been rather spectacular over there. Then again, they tend to be spectacular, so I sort of let that slip by. Let me rectify that here.

And now, a little bit on health insurance.

I had travel insurance as part of my health insurance with the Archive. That is still being sorted out, but a large deposit had to be put on the Archive’s corporate card as a down-payment during the sorting out, another fantastic generosity, even if it’s technically a loan. I welcome the coming paperwork and nailing down of financial brass tacks for a specific reason:

I am someone who once walked into an emergency room with no insurance (back in 2010), got a blood medication IV, stayed around a few hours, and went home, generating a $20,000 medical bill in the process. It got knocked down to $9k over time, and I ended up being thrown into a low-income program they had that allowed them to write it off (I think). That bill could have destroyed me, financially. Therefore, I’m super sensitive to the costs of medical care.

In Australia, it is looking like the heart operation and the 3 day hospital stay, along with all the tests and staff and medications, are going to round out around $10,000 before the insurance comes in and knocks that down further (I hope). In the US, I can’t imagine that whole thing being less than $100,000.

The biggest culture shock for me was how little any of the medical staff, be they doctors or nurses or administrators, cared about the money. They didn’t have any real info on what things cost, because pretty much everything is free there. I’ve equated it to asking a restaurant where the best toilets are to use a few hours after your meal – they might have some random ideas, but nobody’s really thinking that way. It was a huge factor in my returning to the emergency room so willingly; each visit, all-inclusive, was $250 AUD, which is even less in US dollars. $250 is something I’ll gladly pay for peace of mind, and I did, twice. The difference in the experience is remarkable. I realize this is a hot button issue now, but chalk me up as another person for whom a life-changing experience could come within a remarkably close distance of being an influence on where I might live in the future.

Dr. Sonny Palmer, who did the insertion of my stent in the operating room.

I had a pile of plans and things to get done (documentaries, software, cutting down on my possessions, and so on), and I’ll be getting back to them. I don’t really have an urge to maintain some sort of health narrative on here, and I certainly am not in the mood to urge any lifestyle changes or preach a way of life to folks. I’ll answer questions if people have them from here on out, but I’d rather be known for something other than powering through a heart attack, and maybe, with some effort, I can do that.

Thanks again to everyone who has been there for me, online and off, in person and far away, over the past few weeks. I’ll try my best to live up to your hopes about what opportunities my second chance at life will give me.


Host django app and mediawiki on same server

Published 13 Mar 2017 by Pallav Gupta in Newest questions tagged mediawiki - Stack Overflow.

I want to host a Django app alongside a MediaWiki on the same server (same EC2 instance) in such a way that my domain points to my Django app and a subdomain like points to the MediaWiki server. I want to do this with nginx, as I have deployed my Django app using the instructions on this link:

Essentially, I want to add mediawiki for my website and it should be accessible via

How do I go about it?

How do you edit the HTML for MediaWiki's Special:UserLogin page in MW 1.28?

Published 13 Mar 2017 by Josh in Newest questions tagged mediawiki - Stack Overflow.

Previously, there was a class called UserloginTemplate extending BaseTemplate that you were free to copypaste into your own file and use instead in the class loader.

Now, I cannot even find what file this HTML is coming from. I have found includes/specialpage/LoginSignupSpecialPage.php ... but it's Abstract.

abstract class LoginSignupSpecialPage extends AuthManagerSpecialPage {

I have no idea where to begin and any information I can find about this is for older versions of MediaWiki.

On the Red Mud Trail in Yunnan

Published 13 Mar 2017 by Tom Wilson in tom m wilson.

I finally made it to downtown Kunming last weekend.  Amazingly there were still a few of the old buildings standing in the centre (although they were a tiny minority). Walking across Green Lake, a lake in downtown Kunming with various interconnected islands in its centre, I passed through a grove of bamboo trees. Old women […]

Building a dynamic wordpress interface to a mediawiki page

Published 13 Mar 2017 by A. H. Bullen in Newest questions tagged mediawiki - Stack Overflow.

I will apologize in advance; I am obviously missing a concept.

I am using RDP Wiki Embed to build a WordPress interface to a local MediaWiki install. It works beautifully as a plugin. I don't, however, want to build 3,000+ WordPress pages, one for each page on our MediaWiki. I want to be able to feed a GET string to RDP Wiki Embed.

My solution is to build a template that would take the mediawiki page title as an argument and then run RDP Wiki Embed. My code, based on page.php:

<?php $authorID = $_GET['authorid']; ?>
<div class="wrap">
    <div id="primary" class="content-area">
            <main id="main" class="site-main" role="main">
                    <?php
                    while ( have_posts() ) : the_post();
                            get_template_part( 'template-parts/page/content', 'page' );
                            echo "[rdp-wiki-embed url='" . $authorID . "']";
                    endwhile; // End of the loop.
                    ?>
            </main><!-- #main -->
    </div><!-- #primary -->
</div><!-- .wrap -->

<?php get_footer(); ?>

Obviously, all this code does is echo the page with a shortcode as a literal string, instead of executing the shortcode.

Can someone point me in the correct direction?
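One direction worth trying (a sketch, not verified against this setup; it assumes the plugin registers its shortcode under the name `rdp-wiki-embed`) is to run the string through WordPress's `do_shortcode()`, which expands a shortcode at runtime instead of printing it literally:

```php
// Hypothetical fix: expand the shortcode at runtime instead of
// echoing it as literal text; esc_url() sanitizes the GET value.
echo do_shortcode( "[rdp-wiki-embed url='" . esc_url( $authorID ) . "']" );
```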

EC Web Accessibility Directive Expert Group (WADEX)

Published 13 Mar 2017 by Shadi Abou-Zahra in W3C Blog.

The European Commission (EC) recently launched the  Web Accessibility Directive Expert Group (WADEX). This group has the mission “to advise the Commission in relation to the preparation of delegated acts, and in the early stages of the preparation of implementing acts” in relation to the EU Directive on the accessibility of the websites and mobile applications of public sector bodies.

More specifically, the focus of this group is to advise the EC on the development of:

This relates closely to the development of the W3C Web Content Accessibility Guidelines (WCAG) 2.1, which is expected to provide improvements for mobile accessibility. It also relates to several other W3C resources on web accessibility, including the Website Accessibility Conformance Evaluation Methodology (WCAG-EM) and its Report Generator, as well as Involving Users in Evaluating Web Accessibility.

I am delighted to have been appointed as an expert to the WADEX sub-group, to represent W3C. With this effort I hope we can further improve the harmonization of web accessibility standards and practices across Europe and internationally, also in line with the EC objectives for a single digital market.

Derrida vs. the rationalists, truth in the age of bullshit, and the politics of humanism

Published 13 Mar 2017 by in New Humanist Articles and Posts.

The best long-reads from the New Humanist this month.

The Rojava experiment

Published 13 Mar 2017 by in New Humanist Articles and Posts.

Behind the frontlines in Syria, a self-governing Kurdish region is making a radical attempt at gender equality.

Block Storage Comes to Singapore; Five More Datacenters on the Way!

Published 12 Mar 2017 by DigitalOcean in DigitalOcean Blog.

Today, we're excited to share that Block Storage is available to all Droplets in our Singapore region. With Block Storage, you can scale your storage independently of your compute and have more control over how you grow your infrastructure, enabling you to build and scale larger applications more easily. Block Storage has been a key part of our overall focus on strengthening the foundation of our platform to increase performance and enable our customers to scale.

We've seen incredible engagement since our launch last July. Together, you have created more than 95,000 Block Storage volumes in SFO2, NYC1, and FRA1 to scale databases, take backups, store media, and much more; SGP1 is our fourth datacenter with Block Storage and the first in the Asia-Pacific region.

As we continue to upgrade and augment our other datacenters, we'll be ensuring that Block Storage is added too. In order to help you plan your deployments, we've finalized the timelines for the next five regions. Here is the schedule we're targeting for Block Storage rollout in 2017:

We'll have more specific updates to share on SFO1, NYC2, and AMS2 in a future update.

Inside SGP1, our Singapore Datacenter region.

Thanks to everyone who has given us feedback and used Block Storage so far. Please keep it coming. You can try creating your first Block Storage volume in Singapore today!

Ben Schaechter
Product Manager, Droplet & Block Storage

Week #8: Warriors are on the right path

Published 12 Mar 2017 by legoktm in The Lego Mirror.

As you might have guessed due to the lack of previous coverage of the Warriors, I'm not really a basketball fan. But the Warriors are in an interesting place right now. After setting an NBA record for being the fastest team to clinch a playoff spot, Coach Kerr has started resting his starters and the Warriors have a three game losing streak. This puts the Warriors in danger of losing their first seed spot with the San Antonio Spurs only half a game behind them.

But I think the Warriors are doing the right thing. Last year the Warriors set the record for having the best regular season record in NBA history, but also became the first team in NBA history to have a 3-1 advantage in the finals and then lose.

No doubt there was immense pressure on the Warriors last year. It was just expected of them to win the championship, there really wasn't anything else.

So this year they can easily avoid a lot of that pressure by not being the best team in the NBA on paper. They shouldn’t worry about being the top seed – just finish in the top four and play their best in the playoffs. Get some rest; they have a huge advantage over every other team simply by already being in the playoffs with so many games left to play.

28th birthday of the Web

Published 12 Mar 2017 by Jeff Jaffe in W3C Blog.

Today, Sunday 12 March, 2017, the W3C celebrates the 28th birthday of the Web.

We are honored to work with our Director, Sir Tim Berners-Lee, and our members to create standards for the Web for All and the Web on Everything.

Under Tim’s continuing leadership, hundreds of member organizations and thousands of engineers world-wide work on our vital mission – Leading the Web to its Full Potential.

For more information on what Tim views as both challenges and hopes for the future, see: “Three challenges for the web, according to its inventor” at the World Wide Web Foundation.

Use email as username in Mediawiki

Published 10 Mar 2017 by Bo Wang in Newest questions tagged mediawiki - Stack Overflow.

I am encountering some issues integrating my MediaWiki with my enterprise LDAP server using the LDAP extension. Our LDAP server uses email addresses as the uid, so I have to use an email address as the username in MediaWiki. But when I log in with an email address, after LDAP authentication passes, MediaWiki always prompts: "Auto-creation of a local account failed: You have not specified a valid username.", meaning the username is invalid. I also tried to create a MediaWiki user with an email address as the username; it gives the same error.

So is it possible to make MediaWiki relax username validation so that an email address can be a username?
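One knob that often comes up in this situation (a hedged pointer, not a verified fix for this particular LDAP setup): MediaWiki rejects "@" in usernames by default via `$wgInvalidUsernameCharacters`, so clearing it in LocalSettings.php is a common first step when LDAP uids are email addresses:

```php
# LocalSettings.php — allow "@" in usernames so email-style logins
# are no longer rejected as invalid (the default value is '@').
$wgInvalidUsernameCharacters = '';
```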

China – Arrival in the Middle Kingdom

Published 9 Mar 2017 by Tom Wilson in tom m wilson.

I’ve arrived in Kunming, the little red dot you can see on the map above.  I’m here to teach research skills to undergraduate students at Yunnan Normal University.  As you can see, I’ve come to a point where the foothills of the Himalayas fold up into a bunch of deep creases.  Yunnan province is the area of […]

Updates 1.2.4 and 1.1.8 released

Published 9 Mar 2017 by Roundcube Webmail Dev Team in Roundcube Webmail Project News.

We just published another update to both stable versions, 1.2 and 1.1, delivering important bug fixes and improvements which we picked from the upstream branch.

Included is a fix for a recently reported XSS security issue with CSS styles inside an SVG tag.

See the full changelog for 1.2.4 in the wiki. And for version 1.1.8 in the release notes.

Both versions are considered stable and we recommend updating all production installations of Roundcube with either of these versions. Download them from GitHub via

As usual, don’t forget to backup your data before updating!

mediawiki instance served by apache on specific domain

Published 9 Mar 2017 by Miha Jamsek in Newest questions tagged mediawiki - Stack Overflow.

So, I use MediaWiki, hosted by Apache (wiki dir is in /var/www/html/wiki).

I have my domain which is redirected through apache virtual host to port 8080 for some nodejs app.

I have now tried to set up my wiki to be served on

In my DNS records i have both www and wiki domains redirected to same server.

First I tried to make apache virtual host to handle domain request:

<VirtualHost *:80>
    ServerAdmin webmaster@localhost

    DocumentRoot /var/www/html/wiki
</VirtualHost>

However if I tried to connect to or I got the message that the page doesn't exists.

I went to check my wiki's LocalSetting.php and I tried editing this line:

## If I changed the value to this, it didn't work
$wgServer = "";
## If I set the value to this, it was working
$wgServer = "http://my-server-IP";

However, if I set $wgServer to my server IP, whenever I write, I get redirected to http://my-IP/wiki instead of staying on the domain name.

I would appreciate help on how to properly set up my wiki's or Apache's settings to host my wiki on my domain.

Book review: Lincoln in the Bardo

Published 9 Mar 2017 by in New Humanist Articles and Posts.

The first novel from acclaimed short story writer George Saunders is strange and wise.

MediaWiki API - Get company official websites

Published 9 Mar 2017 by Kiruthika Kumar in Newest questions tagged mediawiki - Stack Overflow.

I would like to get all the official company websites using the MediaWiki API. It would also be helpful if I could filter them by country.

Basically, MediaWiki uses Wikidata sources. They have numerous lists of company information, which are more reliable. I have tried out some basic queries.

Ex :

Above, I have used the title property, and it's not a good approach. So is there any way to get company details based on some other info rather than the wiki page title?

If there is any other source to scrape company information from, please let me know. :)

Thanks in advance !!

Bug in mediawiki WidgetRenderer.php - deprecated /e in preg_replace

Published 8 Mar 2017 by Dennis Grant in Newest questions tagged mediawiki - Stack Overflow.

So I'm trying to restore a mediawiki install that got munched by an upgrade to Ubuntu 16.04 - long story.

Anyway, this wiki uses a Google Calendar Widget. That needs the Widgets extension.

When including the Widget: code, it throws an error about /e being no longer supported in preg_replace and to use preg_replace_callback instead.

This is the offending code:

public static function processEncodedWidgetOutput( &$out, &$text ) {
                // Find all hidden content and restore to normal
                $text = preg_replace(
                        '/ENCODED_CONTENT ' . self::$mRandomString . '([0-9a-zA-Z\/+]+=*)* END_ENCODED_CONTENT/esm',
                        'base64_decode("$1")',
                        $text
                );

                return true;
}

I think I know what is going on here - the content that matches in the parens is being passed to base64_decode. But I haven't touched PHP in about a decade.

How would this function be rewritten using preg_replace_callback? Or is there a better way to do it?
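A sketch of the same function rewritten with preg_replace_callback (assuming, per the error and the base64 observation above, that the old /e replacement simply base64-decoded the captured group):

```php
public static function processEncodedWidgetOutput( &$out, &$text ) {
        // Find all hidden content and restore to normal. The /e modifier
        // is gone; decode the captured group in a callback instead.
        $text = preg_replace_callback(
                '/ENCODED_CONTENT ' . self::$mRandomString . '([0-9a-zA-Z\/+]+=*)* END_ENCODED_CONTENT/sm',
                function ( $matches ) {
                        return base64_decode( $matches[1] );
                },
                $text
        );

        return true;
}
```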



WWW2017 and W3Cx Webdev contests at Perth’s Festival of the Web

Published 8 Mar 2017 by Marie-Claire Forgue in W3C Blog.

WWW2017 is in less than a month! The 26th edition of the annual World Wide Web Conference will be held in Perth, Australia, from 2 to 7 April 2017.

This year again, W3C proposes a W3C track where conference attendees are invited to learn from, meet, and discuss with W3C’s member and team experts. Over two days, on Wednesday 4 and Thursday 5 April, the current state of the art and future developments in Web Accessibility, Web of Things, Spatial Data on the Web, and Web privacy will be presented and demonstrated. Many thanks to our members and the W3C Australia Office for making this happen!

W3C also participates in the Festival of the Web (FoW). The conference organizers have created a bigger event encompassing many activities, including Web for All (W4A) (and its accessibility hack), co-organized by our colleague Vivienne Conway (Edith Cowan University). FoW’s numerous activities run from 2 to 9 April 2017 all over the city, with the people and for the people, bringing together entrepreneurs, academia, industry, government and the Perth community.

And for the attention of Web developers and designers who love to code and have fun, my colleagues and I have designed not one but three #webdev contests – see below for a short description of each:

Look for the contests’ long descriptions, with accompanying tips and resources on the W3Cx’s contests page.

The contests are open to anyone and we’ll accept your projects until Friday 6 April (at 23h59 UTC) (see participation rules). The jury members of the competition are Michel Buffa (W3Cx trainer, University Côte d’Azur), Bert Bos (co-inventor of CSS) and myself.

We will deliberate on Friday 7 April 2017 — on site in Perth. Looking forward to meeting you there!

Introducing Similarity Search at Flickr

Published 7 Mar 2017 by Clayton Mellina in

At Flickr, we understand that the value in our image corpus is only unlocked when our members can find photos and photographers that inspire them, so we strive to enable the discovery and appreciation of new photos.

To further that effort, today we are introducing similarity search on Flickr. If you hover over a photo on a search result page, you will reveal a “…” button that exposes a menu that gives you the option to search for photos similar to the photo you are currently viewing.

In many ways, photo search is very different from traditional web or text search. First, the goal of web search is usually to satisfy a particular information need, while with photo search the goal is often one of discovery; as such, it should be delightful as well as functional. We have taken this to heart throughout Flickr. For instance, our color search feature, which allows filtering by color scheme, and our style filters, which allow filtering by styles such as “minimalist” or “patterns,” encourage exploration. Second, in traditional web search, the goal is usually to match documents to a set of keywords in the query. That is, the query is in the same modality—text—as the documents being searched. Photo search usually matches across modalities: text to image. Text querying is a necessary feature of a photo search engine, but, as the saying goes, a picture is worth a thousand words. And beyond saving people the effort of so much typing, many visual concepts genuinely defy accurate description. Now, we’re giving our community a way to easily explore those visual concepts with the “…” button, a feature we call the similarity pivot.

The similarity pivot is a significant addition to the Flickr experience because it offers our community an entirely new way to explore and discover the billions of incredible photos and millions of incredible photographers on Flickr. It allows people to look for images of a particular style, it gives people a view into universal behaviors, and even when it “messes up,” it can force people to look at the unexpected commonalities and oddities of our visual world with a fresh perspective.

What is “similarity”?

To understand how an experience like this is powered, we first need to understand what we mean by “similarity.” There are many ways photos can be similar to one another. Consider some examples.

It is apparent that all of these groups of photos illustrate some notion of “similarity,” but each is different. Roughly, they are: similarity of color, similarity of texture, and similarity of semantic category. And there are many others that you might imagine as well.

What notion of similarity is best suited for a site like Flickr? Ideally, we’d like to be able to capture multiple types of similarity, but we decided early on that semantic similarity—similarity based on the semantic content of the photos—was vital to facilitate discovery on Flickr. This requires a deep understanding of image content for which we employ deep neural networks.

We have been using deep neural networks at Flickr for a while for various tasks such as object recognition, NSFW prediction, and even prediction of aesthetic quality. For these tasks, we train a neural network to map the raw pixels of a photo into a set of relevant tags, as illustrated below.

Internally, the neural network accomplishes this mapping incrementally by applying a series of transformations to the image, which can be thought of as a vector of numbers corresponding to the pixel intensities. Each transformation in the series produces another vector, which is in turn the input to the next transformation, until finally we have a vector that we specifically constrain to be a list of probabilities for each class we are trying to recognize in the image. To be able to go from raw pixels to a semantic label like “hot air balloon,” the network discards lots of information about the image, including information about appearance, such as the color of the balloon, its relative position in the sky, etc. Instead, we can extract an internal vector in the network before the final output.

For common neural network architectures, this vector—which we call a “feature vector”—has many hundreds or thousands of dimensions. We can’t necessarily say with certainty that any one of these dimensions means something in particular as we could at the final network output, whose dimensions correspond to tag probabilities. But these vectors have an important property: when you compute the Euclidean distance between these vectors, images containing similar content will tend to have feature vectors closer together than images containing dissimilar content. You can think of this as a way that the network has learned to organize information present in the image so that it can output the required class prediction. This is exactly what we are looking for: Euclidean distance in this high-dimensional feature space is a measure of semantic similarity. The graphic below illustrates this idea: points in the neighborhood around the query image are semantically similar to the query image, whereas points in neighborhoods further away are not.

This measure of similarity is not perfect and cannot capture all possible notions of similarity—it will be constrained by the particular task the network was trained to perform, i.e., scene recognition. However, it is effective for our purposes, and, importantly, it contains information beyond merely the semantic content of the image, such as appearance, composition, and texture. Most importantly, it gives us a simple algorithm for finding visually similar photos: compute the distance in the feature space of a query image to each index image and return the images with lowest distance. Of course, there is much more work to do to make this idea work for billions of images.
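The brute-force version of that simple algorithm is easy to sketch. The toy below uses randomly generated stand-ins for real feature vectors (and an arbitrary `k` of 5); it ranks every index vector by Euclidean distance to the query:

```python
import numpy as np

def exact_search(query, index, k=5):
    """Rank every index vector by Euclidean distance to the query."""
    dists = np.linalg.norm(index - query, axis=1)  # one distance per index item
    nearest = np.argsort(dists)[:k]                # ids of the k closest vectors
    return nearest, dists[nearest]

# Toy stand-ins for real feature vectors: 10,000 random 256-d points.
rng = np.random.default_rng(0)
index = rng.normal(size=(10_000, 256)).astype(np.float32)
query = index[42] + 0.01 * rng.normal(size=256)  # a near-duplicate of item 42

ids, dists = exact_search(query, index)
print(ids[0])  # -> 42: the near-duplicate ranks first
```

An exhaustive scan like this costs O(N·d) per query, which is exactly the cost the rest of the article works to avoid.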

Large-scale approximate nearest neighbor search

With an index as large as Flickr’s, computing distances exhaustively for each query is intractable. Additionally, storing a high-dimensional floating point feature vector for each of billions of images takes a large amount of disk space and poses even more difficulty if these features need to be in memory for fast ranking. To solve these two issues, we adopt a state-of-the-art approximate nearest neighbor algorithm called Locally Optimized Product Quantization (LOPQ).

To understand LOPQ, it is useful to first look at a simple strategy. Rather than ranking all vectors in the index, we can first filter a set of good candidates and only do expensive distance computations on them. For example, we can use an algorithm like k-means to cluster our index vectors, find the cluster to which each vector is assigned, and index the corresponding cluster id for each vector. At query time, we find the cluster that the query vector is assigned to and fetch the items that belong to the same cluster from the index. We can even expand this set if we like by fetching items from the next nearest cluster.
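This candidate-filtering strategy can be sketched in a few lines of numpy. For brevity, the “centroids” below are simply sampled from the data; a real system would run full k-means to fit them:

```python
import numpy as np

rng = np.random.default_rng(0)
index = rng.normal(size=(2_000, 32)).astype(np.float32)

# Stand-in coarse quantizer: centroids sampled from the data.
# In practice these come from a full k-means run over the index vectors.
k = 50
centroids = index[rng.choice(len(index), size=k, replace=False)]

def assign(vectors):
    """Nearest-centroid cluster id for each vector."""
    d2 = ((vectors[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    return d2.argmin(axis=1)

# Inverted index: cluster id -> array of item ids assigned to that cluster.
cells = assign(index)
inverted = {c: np.flatnonzero(cells == c) for c in range(k)}

# Query time: fetch only the query's cell, then rank just those candidates.
query = index[7]
candidates = inverted[assign(query[None, :])[0]]
best = candidates[np.linalg.norm(index[candidates] - query, axis=1).argmin()]
print(best)  # -> 7: the item itself lands in its own cell at distance zero
```

Expanding the candidate set, as the text notes, just means fetching the next-nearest cells in order of centroid distance.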

This idea will take us far, but not far enough for a billions-scale index. For example, with 1 billion photos, we need 1 million clusters so that each cluster contains an average of 1000 photos. At query time, we will have to compute the distance from the query to each of these 1 million cluster centroids in order to find the nearest clusters. This is quite a lot. We can do better, however, if we instead split our vectors in half by dimension and cluster each half separately. In this scheme, each vector will be assigned to a pair of cluster ids, one for each half of the vector. If we choose k = 1000 to cluster both halves, we have k² = 1000 × 1000 = 1,000,000 possible pairs. In other words, by clustering each half separately and assigning each item a pair of cluster ids, we can get the same granularity of partitioning (1 million clusters total) with only 2 × 1000 = 2000 distance computations, each over half the number of dimensions, for a total computational savings of 1000x. Conversely, for the same computational cost, we gain a factor of k more partitions of the data space, providing a much finer-grained index.
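The two-half scheme looks like this in code. The sketch uses a smaller k than the 1000 in the text, and codebooks sampled from the data as a stand-in for running k-means on each half:

```python
import numpy as np

rng = np.random.default_rng(0)
dim, k = 64, 256                # the text uses k = 1000; 256 keeps the toy small
index = rng.normal(size=(5_000, dim)).astype(np.float32)
half = dim // 2

# Stand-in codebooks: in practice, run k-means on each half separately.
cb1 = index[rng.choice(len(index), k, replace=False), :half]
cb2 = index[rng.choice(len(index), k, replace=False), half:]

def pair_id(v):
    """Assign a vector a pair of cluster ids, one per half."""
    i = ((v[:half] - cb1) ** 2).sum(1).argmin()   # k half-dimension comparisons
    j = ((v[half:] - cb2) ** 2).sum(1).argmin()   # k more
    return i, j

i, j = pair_id(index[0])
# 2k = 512 half-dimension distance computations index the vector into one of
# k*k = 65,536 cells; a flat k-means index with that many cells would need
# 65,536 full-dimension computations per query.
```

With k = 1000, as in the text, the same 2000 half-dimension computations address a million cells.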

This idea of splitting vectors into subvectors and clustering each split separately is called product quantization. When we use this idea to index a dataset it is called the inverted multi-index, and it forms the basis for fast candidate retrieval in our similarity index. Typically the distribution of points over the clusters in a multi-index will be unbalanced as compared to a standard k-means index, but this imbalance is a fair trade for the much higher resolution partitioning that it buys us. In fact, a multi-index will only be balanced across clusters if the two halves of the vectors are perfectly statistically independent. This is not the case in most real world data, but some heuristic preprocessing—like PCA-ing and permuting the dimensions so that the cumulative per-dimension variance is approximately balanced between the halves—helps in many cases. And just like the simple k-means index, there is a fast algorithm for finding a ranked list of clusters to a query if we need to expand the candidate set.

After we have a set of candidates, we must rank them. We could store the full vector in the index and use it to compute the distance for each candidate item, but this would incur a large memory overhead (for example, 256-dimensional vectors of 4-byte floats would require 1 TB for 1 billion photos) as well as a computational overhead.

LOPQ solves these issues by performing another product quantization, this time on the residuals of the data. The residual of a point is the difference vector between the point and its closest cluster centroid. Given a residual vector and the cluster indexes along with the corresponding centroids, we have enough information to reproduce the original vector exactly. Instead of storing the residuals, LOPQ product quantizes the residuals, usually with a higher number of splits, and stores only the cluster indexes in the index. For example, if we split the vector into 8 splits and each split is clustered with 256 centroids, we can store the compressed vector with only 8 bytes regardless of the number of dimensions to start (though certainly a higher number of dimensions will result in higher approximation error). With this lossy representation we can produce a reconstruction of a vector from the 8-byte codes: we simply take each quantization code, look up the corresponding centroid, and concatenate these 8 centroids together to produce a reconstruction. Likewise, we can approximate the distance from the query to an index vector by computing the distance between the query and the reconstruction. We can do this computation quickly for many candidate points by computing the squared difference of each split of the query to all of the centroids for that split. After computing this table, we can compute the squared difference for an index point by looking up the precomputed squared difference for each of the 8 indexes and summing them together to get the total squared difference. This caching trick allows us to quickly rank many candidates without resorting to distance computations in the original vector space.
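The 8-byte codes and the table-based ranking trick can be sketched as follows. For brevity this toy quantizes raw vectors rather than residuals, and uses sampled codebooks in place of per-split k-means; both are simplifications of what LOPQ actually does:

```python
import numpy as np

rng = np.random.default_rng(0)
dim, m, k = 32, 8, 256          # 8 splits, 256 centroids each -> 8-byte codes
data = rng.normal(size=(2_000, dim)).astype(np.float32)
sub = dim // m                   # dimensions per split

# Stand-in codebooks, one per split (LOPQ fits these to residuals via k-means).
books = [data[rng.choice(len(data), k, replace=False), s*sub:(s+1)*sub]
         for s in range(m)]

def encode(v):
    """Compress a vector to m one-byte centroid indexes."""
    return np.array([((v[s*sub:(s+1)*sub] - books[s]) ** 2).sum(1).argmin()
                     for s in range(m)], dtype=np.uint8)

def distance_table(q):
    """Squared distance from each query split to every centroid of that split."""
    return np.stack([((q[s*sub:(s+1)*sub] - books[s]) ** 2).sum(1)
                     for s in range(m)])            # shape (m, k)

codes = np.stack([encode(v) for v in data])         # 8 bytes per item
table = distance_table(data[3])                     # query = item 3
# Approximate squared distance per item: m table lookups plus a sum.
approx = table[np.arange(m), codes].sum(axis=1)
print(approx.argmin())  # -> 3: the query's own codes minimize the lookup sum
```

The table costs m × k distance computations once per query; after that, each candidate is ranked with just m lookups and a sum.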

LOPQ adds one final detail: for each cluster in the multi-index, LOPQ fits a local rotation to the residuals of the points that fall in that cluster. This rotation is simply a PCA that aligns the major directions of variation in the data to the axes, followed by a permutation to heuristically balance the variance across the splits of the product quantization. Note that this is the exact preprocessing step that is usually performed at the top-level multi-index. It tends to make the approximate distance computations more accurate by mitigating errors introduced by assuming that each split of the vector in the product quantization is statistically independent from other splits. Additionally, since a rotation is fit for each cluster, the rotations fit the local data distribution better.
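A toy illustration of why the per-cluster rotation helps, using synthetic correlated residuals: a PCA rotation decorrelates the dimensions, which is exactly the independence the product quantizer assumes. (The variance-balancing permutation across splits is omitted here.)

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic residuals for one coarse cluster, with correlated dimensions.
mixing = rng.normal(size=(4, 4))
resid = rng.normal(size=(1_000, 4)) @ mixing

# Local rotation: eigenvectors of the residual covariance (i.e., a PCA)
# align the main directions of variation with the coordinate axes.
cov = np.cov(resid, rowvar=False)
_, rotation = np.linalg.eigh(cov)
rotated = resid @ rotation

# After rotation the covariance is (numerically) diagonal: the dimensions are
# decorrelated, so splitting them across sub-quantizers loses less accuracy.
off_diag = np.cov(rotated, rowvar=False)
np.fill_diagonal(off_diag, 0.0)
print(abs(off_diag).max() < 1e-8)  # -> True
```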

Below is a diagram from the LOPQ paper that illustrates the core ideas of LOPQ. K-means (a) is very effective at allocating cluster centroids, illustrated as red points, that target the distribution of the data, but it has other drawbacks at scale as discussed earlier. In the 2d example shown, we can imagine product quantizing the space with 2 splits, each with 1 dimension. Product Quantization (b) clusters each dimension independently and cluster centroids are specified by pairs of cluster indexes, one for each split. This is effectively a grid over the space. Since the splits are treated as if they were statistically independent, we will, unfortunately, get many clusters that are “wasted” by not targeting the data distribution. We can improve on this situation by rotating the data such that the main dimensions of variation are axis-aligned. This version, called Optimized Product Quantization (c), does a better job of making sure each centroid is useful. LOPQ (d) extends this idea by first coarsely clustering the data and then doing a separate instance of OPQ for each cluster, allowing highly targeted centroids while still reaping the benefits of product quantization in terms of scalability.

LOPQ is a state-of-the-art quantization method, and you can find more information about the algorithm, as well as benchmarks, here. Additionally, we provide an open-source implementation in Python and Spark which you can apply to your own datasets. The algorithm produces a set of cluster indexes that can be queried efficiently in an inverted index, as described. We have also explored use cases that use these indexes as a hash for fast deduplication of images and large-scale clustering. These extended use cases are studied here.


We have described our system for large-scale visual similarity search at Flickr. Techniques for producing high-quality vector representations for images with deep learning are constantly improving, enabling new ways to search and explore large multimedia collections. These techniques are being applied in other domains as well to, for example, produce vector representations for text, video, and even molecules. Large-scale approximate nearest neighbor search has importance and potential application in these domains as well as many others. Though these techniques are in their infancy, we hope similarity search provides a useful new way to appreciate the amazing collection of images at Flickr and surface photos of interest that may have previously gone undiscovered. We are excited about the future of this technology at Flickr and beyond.


Yannis Kalantidis, Huy Nguyen, Stacey Svetlichnaya, Arel Cordero. Special thanks to the rest of the Computer Vision and Machine Learning team and the Vespa search team, which manages Yahoo’s internal search engine.

This Month’s Writer’s Block

Published 7 Mar 2017 by Dave Robertson in Dave Robertson.


Where in mediawiki "Page deleted" flag located

Published 6 Mar 2017 by Velaro in Newest questions tagged mediawiki - Stack Overflow.

I need to build a SQL query, but I cannot find the column that indicates whether a page was deleted. I see the archive table, but I can't join against it because its data is different.

W3C announces antitrust guidance document

Published 6 Mar 2017 by Wendy Seltzer in W3C Blog.

The W3C supports a community including more than 400 member organizations in developing Open Standards for the Open Web Platform. Many of these organizations are competitors in highly competitive markets. Others are researchers, consumers, and regulators. They come together in W3C Working Groups and Interest Groups to develop standards for interoperability: shared languages, formats, and APIs.

The W3C Process supports this work through a framework of consensus-based decision-making, a focus on technical requirements and interop testing, and our Royalty-Free Patent Policy.

As we’re joined by more participants from a wider range of industries, including Payments, Automotive, and Publishing, we wanted to highlight the role Process plays in helping competitors to work together fairly. Accordingly, we published a brief antitrust guidance document reflecting our existing practices.

Antitrust and competition law protect the public by requiring market competitors to act fairly in the marketplace. Open standards are pro-competitive and pro-user because an open, interoperable platform increases the opportunities for innovative competition in and on the Web. We continue to invite wide participation in the work of constructing these standards.

A politics of humanism can help build a just, free and more equal world

Published 6 Mar 2017 by in New Humanist Articles and Posts.

Intolerance and bigotry are in the ascendant. Owen Jones surveys the challenges ahead.

MediaWiki infobox/scribunto fails to process lua/template properly

Published 5 Mar 2017 by helion3 in Newest questions tagged mediawiki - Stack Overflow.

Trying to get Infoboxes working on an install of mediawiki 1.28 (on ubuntu 16.04 x64).


Here's the scribunto config from LocalSettings.php:

require_once "$IP/extensions/Scribunto/Scribunto.php";
$wgScribuntoDefaultEngine = 'luastandalone';
$wgScribuntoEngineConf['luastandalone']['luaPath'] = '/var/www/public/wiki/extensions/Scribunto/engines/LuaStandalone/binaries/lua5_1_5_linux_64_generic/lua';
$wgScribuntoEngineConf['luastandalone']['errorFile'] = '/var/www/public/wiki/error.txt';

I've imported the wikipedia template without error. However, visiting any pages that I've imported, I see:

Lua error: Internal error: The interpreter has terminated with signal "".
No further details are available.

If I visit Template:Infobox I see:

-- -- This module implements -- -- This module implements Template loop detected: Template:Infobox --

I have no idea where to go from here. I'm importing the file without modification, so something is broken in my setup, but I can't figure out what.

EDIT: Looks like it is a permissions issue. I'm doing this in Vagrant, and while my native OS shows that the file has execute permissions, the Vagrant OS doesn't.

Week #7: 999 assists and no more kneeling

Published 4 Mar 2017 by legoktm in The Lego Mirror.

Joe Thornton is one assist away from reaching 1,000 in his career. He's a team player - the recognition of scoring a goal doesn't matter to him, he just wants his teammates to score. And his teammates want him to achieve this milestone too, as shown by the Sharks passing to Thornton and him passing back instead of them going directly for the easy empty-netter.

Oh, and now that the trade deadline has passed with no movement on the goalie front, it's time for In Jones We Trust:

via /u/MisterrAlex on reddit

In other news, Colin Kaepernick announced that he's going to be a free agent and opted out of the final year of his contract. But in even bigger news, he said he will stop kneeling for the national anthem. I don't know if he is doing that to make himself more marketable, but I wish he would have stood (pun intended) with his beliefs.

google api description on city location

Published 3 Mar 2017 by martinek in Newest questions tagged mediawiki - Stack Overflow.

I'm making a request to the Google API. I want to get descriptive data about a city or location by latitude or by city name, in JavaScript.

Should I use the Google API or the wiki API?

For example:

The box on the right shows information about the city of California, and the top box shows nearby places of interest in California.

FastMail Customer Stories – CoinJar

Published 2 Mar 2017 by David Gurvich in FastMail Blog.

Welcome to our first Customer Story video for 2017 featuring CoinJar Co-Founder and CEO Asher Tan.

CoinJar is Australia’s largest Bitcoin exchange and wallet, and it was while participating in a startup accelerator program that Asher had the idea for creating an easier way to buy, sell and spend the digital currency Bitcoin.

“We had decided to work on some Bitcoin ideas in the consumer space, which were quite lacking at the time,” Asher says.

Participating in the startup process was instrumental in helping Asher and his Co-Founder Ryan Zhou to really hone in on what type of business they needed to build.

CoinJar launched in Melbourne in 2013 and despite experiencing rapid success, Asher is quick to point out that his is a tech business that’s still working within a very new industry.

“It’s a very new niche industry and finding what works as a business, what people want, I think is an ongoing process. You’re continually exploring, but I think that’s what makes it exciting,” Asher says.

Asher says that one of the great things about launching a startup is you can choose the tools you want. Initially starting out with another email provider, Asher and Ryan were soon underwhelmed by both the performance and cost.

“The UI was pretty slow, the package was pretty expensive as well. There was also a lack of flexibility of some of the tools we wanted to use … so we were looking for other options and FastMail came up,” Asher says.

And while most of CoinJar’s business tools are self-hosted, they decided that FastMail was going to be the best choice to meet their requirements for secure, reliable and private email hosting.

Today CoinJar has team members all around the world and uses FastMail’s calendar and timezone feature to keep everyone working together.

CoinJar continues to innovate, recently launching a debit card that allows their customers to buy groceries using Bitcoin.

We’d like to thank Asher for his time and also Ben from Benzen Video Productions for helping us to put this story together.

You can learn more about CoinJar at

“We need critical reflection, individually as well as in the company of others”

Published 2 Mar 2017 by in New Humanist Articles and Posts.

Q&A with Nobel Prize winning economist Amartya Sen.

Songs for the Beeliar Wetlands

Published 2 Mar 2017 by Dave Robertson in Dave Robertson.

The title track of the forthcoming Kiss List album has just been included on an awesome fundraising compilation of 17 songs by local songwriters for the Beeliar wetlands. All proceeds go to #rethinkthelink. Get it while it's hot! You can purchase the whole album or just the songs you like.

Songs for the Beeliar Wetlands: Original Songs by Local Musicians (Volume 1) by Dave Robertson and The Kiss List


How to replace WikiError::isError and WikiErrorMsg?

Published 2 Mar 2017 by Chris Dji in Newest questions tagged mediawiki - Stack Overflow.

After searching Google for an hour, I found nothing. In version 1.23 the classes "WikiError" and "WikiErrorMsg" were removed. How can I replace the uses of these classes in my code?

Stepping Off Meets the Public

Published 1 Mar 2017 by Tom Wilson in tom m wilson.

At the start of February I launched my new book, Stepping Off: Rewilding and Belonging in the South-West, at an event at Clancy’s in Fremantle. On Tuesday evening this week I was talking about the book down at Albany Library. As I was in the area I decided to camp for a couple of […]

What’s new in the W3C Process 2017?

Published 1 Mar 2017 by Philippe le Hegaret in W3C Blog.

As of today, W3C is using a new W3C Process. You can read the full list of substantive changes but I’d like to highlight two changes that are relevant to the W3C community:

  1. Added a process to make a Recommendation Obsolete: An obsolete specification is one that the W3C community has decided should no longer be used. For example, it may no longer represent best practices, or it may not have received wide adoption and seems unlikely to do so in the future. The status of an obsolete specification remains active under the W3C Patent Policy, but it is not recommended for future implementation.
  2. Simplified the steps to publish Edited Recommendations if the new revision makes only editorial changes to the previous Recommendation. This allows W3C to make corrections to its Recommendations without requiring technical review of the proposed changes while keeping an objective to ensure adequate notice.

The W3C Process Document is developed by the W3C Advisory Board’s Process Task Force working within the Revising W3C Process Community Group. Please send comments about our Process to

We’re working on revamping and cleaning our entry page on Standards and Drafts and we’ll make sure to take those Process updates into account.

Media Wiki Consumer Propose Permission

Published 1 Mar 2017 by user2210996 in Newest questions tagged mediawiki - Stack Overflow.

When I try to create a new consumer, I get a permission error, as shown in the image below.

I added the following to LocalSettings.php to grant the permissions, but it did not work for me.

$wgGroupPermissions = array("mwoauthproposeconsumer","mwoauthupdateownconsumer","mwoauthmanageconsumer")

Digital Deli, reading history in the present tense

Published 1 Mar 2017 by Carlos Fenollosa in Carlos Fenollosa — Blog.

Digital Deli: The Comprehensive, User Lovable Menu Of Computer Lore, Culture, Lifestyles, And Fancy is an obscure book published in 1984. I found out about it after learning that the popular Steve Wozniak article titled "Homebrew and How the Apple Came to Be" belonged to a compilation of short articles.

The book

I'm amazed that this book isn't more cherished by the retrocomputing community, as it provides an incredible insight into the state of computers in 1984. We've all read books about their history, but Digital Deli provides a unique approach: it's written in present tense.

Articles are written with a candid and inspiring narrative. Micro computers were new back then, and the authors could only speculate about how they might change the world in the future.

The book is adequately structured in sections covering topics from the origins of computing to Silicon Valley startups and reviews of specific systems. But the most interesting parts for me are not the tech articles but rather the sociological essays.

There are texts on how families welcome computers into the home, the applications of artificial intelligence, micros on Wall Street, and computers in the classroom.

How the Source works

Fortunately, a copy of the book has been preserved online, and I highly encourage you to check it out and track down a copy of your own.

Besides Woz explaining how Apple was founded, don't miss out on Paul Lutus describing how he programmed AppleWriter in a cabin in the woods, Les Solomon envisioning the "magic box" of computing, Ted Nelson on information exchange and his Project Xanadu, Nolan Bushnell on video games, Bill Gates on software usability, the origins of the Internet... the list goes on and on.

Les Solomon

If you love vintage computing you will find a fresh perspective, and if you were alive during the late 70s and early 80s you will feel a big nostalgia hit. In any case, do yourself a favor, grab a copy of this book, and keep it as a manifesto of the greatest revolution in computer history.

Tags: retro, books


Web Content Accessibility Guidelines 2.1 First Public Working Draft

Published 28 Feb 2017 by Joshue O Connor in W3C Blog.

The Accessibility Guidelines Working Group (AG WG) is very happy to announce that the first public working draft of the new Web Content Accessibility Guidelines (WCAG) 2.1 is available. This new version aims to build effectively on the previous foundations of WCAG 2.0 with particular attention being given to the three areas of accessibility on small-screen and touch mobile devices, to users with low vision, and to users with cognitive or learning disabilities.

WCAG 2.0 is a well-established, vibrant standard with a high level of adoption worldwide. WCAG 2.0 is still broadly applicable to many old and new technologies, covering a broad range of needs. However, technology doesn’t sleep, and as it marches on it brings new challenges for developers and users alike. WCAG 2.1 aims to address these diverse challenges in a substantial way. To do this, over the last three years the (newly renamed) AG WG undertook extensive research into the current user requirements for accessible content creation.

This work took place in task forces that bring together people with specific skills and expertise relating to these areas: accessibility on mobile devices, users with low vision, and users with cognitive or learning disabilities. Together this work forms the substantial basis of the new WCAG 2.1 draft.

WCAG 2.1 was initially described in the blog WCAG 2.1 under exploration, which proposed changing from an earlier model of WCAG 2.0 extensions to develop a dot-release of the guidelines. The charter to develop WCAG 2.1 was approved in January 2017. We are also happy to say that we have delivered the first public working draft within the charter’s promised timeline.

So what has the working group been doing? Working very hard looking at how to improve WCAG 2.0! To successfully iterate such a broad and deep standard has not been easy. There has been extensive research, discussion and debate within the task forces and the wider working group in order to better understand the interconnectedness and relationships between diverse and sometimes competing user requirements as we develop new success criteria.

This extensive work has resulted in the development of around 60 new success criteria, of which 28 are now included in this draft, to be used as measures of conformance to the standard. These success criteria have been collected from the three task forces as well as individual submissions. All of these success criteria must be vetted against the acceptance criteria before being formally accepted as part of the guidelines. As WCAG is an international standard and widely adopted, the working group reviews everything very carefully; at this point only three new proposed success criteria have cleared the formal Working Group review process, and these are still subject to change based on public feedback. The draft also includes many proposed Success Criteria that are under consideration but have not yet been formally accepted by the Working Group.

Further review and vetting is necessary but we are very happy to present our work to the world. This is a first draft and not a final complete version. In addition to refining the accepted and proposed Success Criteria included in the draft, the Working Group will continue to review additional proposals which could appear formally in a future version. Through the course of the year, the AG WG plans to process the remaining success criteria along with the input we gather from the public. The group will then produce a semi-final version towards the end of this year along with further supporting “Understanding WCAG 2.1” (like Understanding WCAG 2.0) material.

There is no guarantee that a proposed success criterion appearing in this draft will make it to the final guidelines. Public feedback is really important to us—and based on this feedback the proposed success criteria could be iterated further. We want to hear from all users, authors, tool developers and policy makers about any benefits arising from the new proposed success criteria as well as how achievable you feel it is to conform to their requirements. The AG WG is working hard to ensure backwards compatibility between WCAG 2.1 and WCAG 2.0. However, the full extent and manner of how WCAG 2.1 will build on WCAG 2.0 is still being worked out.

The working group’s intention is for the new proposed success criteria to provide good additional coverage for users with cognitive or learning disabilities, low vision requirements, and users of mobile devices with small screens and touch interfaces. Mapping the delta between these diverse user requirements is rewarding and challenging and this WCAG 2.1 draft has been made possible by the diverse skills and experience brought to bear on this task by the AG WG members.

The AG WG also has an Accessibility Conformance Testing (ACT) Task Force that aims to develop a framework and repository of test rules, to promote a unified interpretation of WCAG among different web accessibility test tools; as well as a 3.0 guidelines project called ‘Silver’ that forecasts more significant changes following a research-focused, user-centered design methodology.

So while WCAG 2.1 is technically a “dot”-release, it is substantial in its reach yet also deliberately constrained to effectively build on the existing WCAG 2.0 framework and practically address issues for users today.


Published 27 Feb 2017 by Tim Berners-Lee in W3C Blog.

The question which has been debated around the net is whether W3C should endorse the Encrypted Media Extensions (EME) standard, which allows a web page to include encrypted content by connecting to an existing Digital Rights Management (DRM) system in the underlying platform. Some people have protested “no”, but in fact I decided the actual logical answer is “yes”. As many people have been so fervent in their demonstrations, I feel I owe it to them to explain the logic. My hope is, as there are many things which need to be protested and investigated and followed up in this world, that the energy which has been expended on protesting EME can be re-channeled into other things which really need it. Of the things they have argued along the way there have also been many things I have agreed with. And to understand the disagreement we need to focus on the actual question: whether W3C should recommend EME.

The reason for recommending EME is that by doing so, we lead the industry who developed it in the first place to form a simple, easy to use way of putting encrypted content online, so that there will be interoperability between browsers. This makes it easier for web developers and also for users. People like to watch Netflix (to pick one example). People spend a lot of time on the web, they like to be able to embed Netflix content in their own web pages, they like to be able to link to it. They like to be able to have discussions where they express what they think about the content where their comments and the content can all be linked to.

Could they put the content on the web without DRM? Well, yes: a huge amount of video content is on the web without DRM. It is only for the big expensive movies that putting content on the web unencrypted makes it too easy for people to copy, and in reality the utopian world of people voluntarily paying full price for content does not work. (Others argue that the whole copyright system should be dismantled, and they can do that in the legislatures and campaign to change the treaties, which will be a long struggle, and meanwhile we do have copyright).

Given DRM is a thing,…

When a company decides to distribute content they want to protect, they have many choices. This is important to remember.

If W3C did not recommend EME then the browser vendors would just make it outside W3C. If EME did not exist, vendors could just create new Javascript based versions. And without using the web at all, it is easy to invite one's viewers to switch to viewing the content in a proprietary app. And if the closed platforms prohibited DRM in apps, then the large content providers would simply distribute their own set-top boxes and game consoles as the only way to watch their stuff.

If the Director Of The Consortium made a Decree that there would be No More DRM in fact nothing would change. Because W3C does not have any power to forbid anything. W3C is not the US Congress, or WIPO, or a court. It would perhaps have shortened the debate. But we would have been distracted from important things which need thought and action on other issues.

Well, could W3C make a stand and, just because DRM is a bad thing for users, refuse to work on DRM and push back wherever it could? Well, that would again not have any effect, because the W3C is not a court or an enforcement agency. W3C is a place for people to talk, and forge consensus over great new technology for the web. Yes, there is an argument made that in any case, W3C should just stand up against DRM, but we, like Canute, understand our power is limited.

But importantly, there are reasons why pushing people away from web is a bad idea: It is better for users for the DRM to be done through EME than other ways.

  1. When the content is in a web page, it is part of the web.
  2. The EME system can ‘sandbox’ the DRM code to limit the damage it can do to the user’s system
  3. The EME system can ‘sandbox’ the DRM code to limit the damage it can do to the user’s privacy.

As mentioned above, when a provider distributes a movie, they have a lot of options. They have different advantages and disadvantages. An important issue here is how much the publisher gets to learn about the user.

So in summary, it is important to support EME as providing a relatively safe online environment in which to watch a movie, as well as the most convenient, and one which makes it a part of the interconnected discourse of humanity.

I should mention that the extent to which the sandboxing of the DRM code protects the user is not defined by the EME spec at all, although current implementations in at least Firefox and Chrome do sandbox the DRM.

Spread to other media

Do we worry that having put movies on the web, then content providers will want to switch also to use it for other media such as music and books? For music, I don’t think so, because we have seen industry move consciously from a DRM-based model to an unencrypted model, where often the buyer’s email address may be put in a watermark, but there is no DRM.

For books, yes this could be a problem, because there have been a large number of closed non-web devices which people are used to, and for which the publishers are used to using DRM. For many the physical devices have been replaced by apps, including DRM, on general purpose devices like closed phones or open computers. We can hope that the industry, in moving to a web model, will also give up DRM, but it isn’t clear.

We have talked about the advantages of different ways of using DRM in distributing movies. Now let us discuss some of the problems with DRM systems in general.

Problems with DRM

Much of this blog post is W3C’s technical perspective on EME, which I provide wearing my Director’s hat – but in the following discussion of DRM and the DMCA (since this is a policy issue), I am expressing my personal opinions.

Problems for users

There are many issues with DRM, from the user’s point of view. These have been much documented elsewhere. Here let me list these:

DRM systems are generally frustrating for users. Some of this can be compounded by things like attempts to region-code a licence so the user can only access the content when they are in a particular country, confusion between “buying” and “renting” something for a fixed term, and issues when content suppliers cease to exist, and all “bought” things become inaccessible.

Despite these issues, users continue to buy DRM-protected content.

Problems for developers

DRM prevents independent developers from building different playback systems that interact with the video stream, for example, to add accessibility features, such as speeding up or slowing down playback.

Problems for Posterity

There is a possibility that we end up in decades’ time with no usable record of these movies, because either they are still encrypted, or because people didn’t bother taking copies of them at the time because the copies would have been useless to them. One of my favorite suggestions is that anyone copyrighting a movie and distributing it encrypted in any way MUST deposit an unencrypted copy with a set of copyright libraries which would include the British Library, the Library of Congress, and the Internet Archive.

Problems with Laws

Much of the pushback against EME has been pushback against DRM, which in turn has been based on specific, important problems with certain laws.

The law most discussed is the US Digital Millennium Copyright Act (DMCA). Other laws exist in other countries which to a greater or lesser extent resemble the DMCA. Some of these have been brought up in the discussions, but we do not have an exhaustive list or analysis of them. It is worth noting that the US has spent a lot of energy using various bilateral and multilateral agreements to persuade other countries to adopt laws like the DMCA. I do not go into the laws of other countries here. I do point out though that this cannot be dismissed as a USA-only problem. That said, let us go into the DMCA in more detail.

Whatever else you would like to change about the Copyright system as a whole, there are particular parts of the DMCA, specifically section 1201, which put innocent security researchers at risk of dire punishment if they are deemed to have thrown light on any DRM system.

There was an attempt at one point in the W3C process to refuse to bring the EME spec forward until all the working group participants would agree to indemnify security researchers under this section. To cut a very long story short, the attempt failed, and historians may point to the lack of leverage the EME spec had to be used in this way, and the difference between the set of companies in the working group and the set of companies which would be likely to sue over the DMCA, among other reasons.

Security researchers

There is currently (2017-02) a related effort at W3C to encourage companies to set up “bug bounty” programs that at least guarantee immunity from prosecution to security researchers who find and report bugs in their systems. While W3C can encourage this, it can only provide guidelines, and cannot change the law. I encourage those who think this is important to help find a common set of best practice guidelines which companies will agree to. A first draft of some guidelines was announced. Please help make them effective and acceptable and get your company to adopt them.

Obviously a more logical thing would be to change the law, but the technical community seems to have become resigned to not being able to have a positive effect on the US legislative system, due to well documented problems with that system.

This is something where public pressure could perhaps be beneficial: on the companies to agree on and adopt protection, not to mention changing the root cause in the DMCA. W3C would like to hear, by the way, of any examples of security researchers having this sort of problem, so that we can all follow this.

The future web

The web has to be universal, to function at all. It has to be capable of holding crazy ideas of the moment, but also the well polished ideas of the century. It must be able to handle any language and culture. It must be able to include information of all types, and media of many genres. Included in that universality is that it must be able to support free stuff and for-pay stuff, as they are all part of this world. This means that it is good for the web to be able to include movies, and so for that, it is better for HTML5 to have EME than to not have it.


The age of bullshit

Published 27 Feb 2017 by in New Humanist Articles and Posts.

We’ve all done it. You say what you have to say to get things done, with little regard for the truth. But does it matter?

Mediawiki should react to dynamically created content

Published 27 Feb 2017 by Sascha R. in Newest questions tagged mediawiki - Stack Overflow.

I have an extension that dynamically creates content for some pages.

E.g. it creates headlines with the HTML tags <h1>, <h2> and <h3>. I want my MediaWiki to react to the headline tags and build a table of contents dynamically.

I already tried using == markup in the specific tags in my extension, but MediaWiki simply renders it as a literal string.

How can I achieve my goal?

Thanks in advance.
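For what it's worth, a likely cause: MediaWiki only builds the table of contents from headings it parses as wikitext, so raw <h1>/<h2> HTML emitted by an extension is invisible to the TOC. A minimal sketch of a tag-hook fix (the hook name and signature here are hypothetical; Parser::recursiveTagParse() is the relevant parser API):

```php
// Hypothetical parser tag hook. Instead of returning raw "<h2>...</h2>" HTML,
// emit wikitext headings and feed them back through the parser, so MediaWiki
// registers them and includes them in the page's auto-generated table of contents.
public static function renderMyTag( $input, array $args, Parser $parser, PPFrame $frame ) {
	$wikitext = "== " . $input . " ==\n";
	// recursiveTagParse() parses the wikitext in the context of the current page,
	// which is what makes the heading count toward the TOC.
	return $parser->recursiveTagParse( $wikitext, $frame );
}
```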


Published 26 Feb 2017 by fabpot in Tags from Twig.


Published 26 Feb 2017 by fabpot in Tags from Twig.

Week #6: Barracuda win streak is great news for the Sharks

Published 24 Feb 2017 by legoktm in The Lego Mirror.

The San Jose Barracuda, the Sharks' AHL affiliate team, is currently riding a 13-game winning streak and is on top of the AHL — and that's great news for the Sharks.

Ever since the Barracuda moved here from Worcester, Mass., it's only been great news for the Sharks. Because they play in the same stadium, sending players up or down becomes as simple as a little paperwork and asking them to switch locker rooms, not cross-country flights.

This allows the Sharks to have a significantly deeper roster, since they can call up new players at a moment's notice. So the Barracuda's win streak is great news for Sharks fans, since it demonstrates how even the minor league players are ready to play in the pros.

And if you're watching hockey, be on the watch for Joe Thornton to score his 1,000th assist! (More on that next week.)

How can I keep mediawiki not-yet-created pages from cluttering my google webmaster console with 404s?

Published 24 Feb 2017 by Sean in Newest questions tagged mediawiki - Webmasters Stack Exchange.

We have a MediaWiki install as part of our site. As on all wikis, people will add links for not-yet-created pages (red links). When followed, these links return a 404 status (as there is no content) along with an invite to add content.

I'm now getting buried in 404 notices in the Google webmaster console for this site. Is there a best way to handle this?

Thanks for any help.
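One common mitigation, sketched below (the /w/ script path is an assumption about a typical MediaWiki install): red links point at index.php-style URLs (action=edit&redlink=1), so a robots.txt rule can keep crawlers away from those while leaving the canonical /wiki/ article URLs crawlable:

```
# robots.txt at the site root (paths are assumptions about a typical MediaWiki layout)
User-agent: *
Disallow: /w/index.php
```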

Cloudflare & FastMail: Your info is safe

Published 24 Feb 2017 by Helen Horstmann-Allen in FastMail Blog.

This week, Cloudflare disclosed a major security breach, affecting hundreds of thousands of services’ customer security. While FastMail uses Cloudflare, your information is safe, and it is not necessary to change your password.

The Cloudflare security breach affects services using Cloudflare to serve website information. When you go to our website (or read your email, or send your password), you are always connecting directly to a FastMail server. We use Cloudflare to serve domain name information only, which does not contain any sensitive or personal customer data.

However, while we do not advocate password reuse, we accept it happens. If your FastMail password is the same as any other web service you use, please change them both immediately (also, use a password manager, and enable two-step verification)! For more information about passwords and security, check out Lock Up Your Passwords and our password and security blog series, starting here.

For more information on the Cloudflare security breach, please check out their blog. Why does FastMail use Cloudflare? DDOSes that target our DNS can be mitigated with Cloudflare's capacity. If you have any other questions for us, please contact support.

This post has been amended to add remediation instructions in the third paragraph for users who may have a reused password.

The Other Half

Published 24 Feb 2017 by Jason Scott in ASCII by Jason Scott.

On January 19th of this year, I set off to California to participate in a hastily-arranged appearance in a UCLA building to talk about saving climate data in the face of possible administrative switchover. I wore a fun hat, stayed in a nice hotel, and saw an old friend from my MUD days for dinner. The appearance was a lot of smart people doing good work and wanting to continue with it.

While there, I was told my father’s heart surgery, which had some complications, was going to require an extended stay, and we were running out of relatives and companions to accompany him. I booked a flight for seven hours after I’d arrive back in New York to go to North Carolina and stay with him. My father has means, so I stayed in a good nearby hotel room. I stayed with him for two and a half weeks, putting in ten- to sixteen-hour days to accompany him through a maze of annoyances, indignities, smart doctors, variant nurses ranging from saints to morons, and generally ensure his continuance.

In the middle of this, I had a non-movable requirement to move the manuals out of Maryland and send them to California. Looking through several possibilities, I settled with: Drive five hours to Maryland from North Carolina, do the work across three days, and drive back to North Carolina. The work in Maryland had a number of people helping me, and involved pallet jacks, forklifts, trucks, and crazy amounts of energy drinks. We got almost all of it, with a third batch ready to go. I drove back the five hours to North Carolina and caught up on all my podcasts.

I stayed with my father another week and change, during which I dented my rental car, and hit another hard limit: I was going to fly to Australia. I also, to my utter horror, realized I was coming down with some sort of cold/flu. I did what I could – stabilized my father’s arrangements, went into the hotel room, put on my favorite comedians in a playlist, turned out the lights, drank 4,000mg of Vitamin C, banged down some orange juice, drank Mucinex, and covered myself in 5 blankets. I woke up 15 hours later in a pool of sweat and feeling like I’d crossed the boundary with that disease. I went back to the hospital to assure my dad was OK (he was), and then prepped for getting back to NY, where I discovered almost every flight for the day was booked due to so many cancelled flights the previous day.

After lots of hand-wringing, I was able to book a very late flight from North Carolina to New York, and stayed there for 5 hours before taking a 25 hour two-segment flight through Dubai to Melbourne.

I landed in Melbourne on Monday the 13th of February, happy that my father was stable back in the US, and prepping for my speech and my other commitments in the area.

On Tuesday I had a heart attack.

We know it happened then, or began to happen, because of the symptoms I started to show – shortness of breath, a feeling of fatigue and an edge of pain that covered my upper body like a jacket. I was fucking annoyed – I felt like I was just super tired and needed some energy, and energy drinks and caffeine weren’t doing the trick.

I met with my hosts for the event I’d do that Saturday, and continued working on my speech.

I attended the conference for that week, did a couple interviews, saw some friends, took some nice tours of preservation departments and discussed copyright with very smart lawyers from the US and Australia.

My heart attack continued, blocking off what turned out to be a quarter of my bloodflow to my heart.

This was annoying me, but I didn’t know what it was, so according to my fitbit I walked 25 miles, walked up 100 flights of stairs, and maintained hours of exercise to snap out of it, across the week.

I did a keynote for the conference. The next day I hosted a wonderful event for seven hours. I asked for a stool because I said I was having trouble standing comfortably. They gave me one. I took rests during it, just so the DJ could get some good time with the crowds. I was praised for keeping the crowd jumping and giving it great energy. I’d now been having a heart attack for four days.

That Sunday, I walked around Geelong, a lovely city near Melbourne, and ate an exquisite meal at Igni, a restaurant whose menu basically has one line to tell you you’ll be eating what they think you should have. Their choices were excellent. Multiple times during the meal, I dozed a little, as I was fatigued. When we got to the tram station, I walked back to the apartment to get some rest. Along the way, I fell to the sidewalk and got up after resting.

I slept off more of the growing fatigue and pain.

The next day I had the second exquisite meal of the trip at Vue Le Monde, a meal that lasted from about 8pm to midnight. My partner Rachel loves good meals and this is one of the finest you can have in the city, and I enjoyed it immensely. It would have been a fine last meal. I’d now been experiencing a heart attack for about a week.

That night, I had a lot of trouble sleeping. The pain was now a complete jacket of annoyance on my body, and there was no way to rest that didn’t feel awful. I decided medical attention was needed.

The next morning, Rachel and I walked 5 blocks to a clinic, found it was closed, and walked further to the RealCare Health Clinic. I was finding it very hard to walk at this point. Dr. Edward Petrov saw me, gave me some therapy for reflux, found it wasn’t reflux, and got concerned, especially as having my heart checked might cost me something significant. He said he had a cardiologist friend who might help, and he called him, and it was agreed we could come right over.

We took a taxi over to Dr. Georg Leitl’s office. He saw me almost immediately.

He was one of those doctors that only needed to take my blood pressure and check my heart with a stethoscope for 30 seconds before looking at me sadly. We went to his office, and he told me I could not possibly get on the plane I was leaving on in 48 hours. He also said I needed to go to Hospital very quickly, and that I had some things wrong with me that needed attention.

He had his assistants measure my heart and take an ultrasound, wrote something on a notepad, put all the papers in an envelope with the words “SONNY PALMER” on them, and drove me personally over in his car to St. Vincent’s Hospital.

Taking me up to the cardiology department, he put me in the waiting room of the surgery, talked to the front desk, and left. I waited 5 anxious minutes, and then was brought into a room with two doctors, one of whom turned out to be Dr. Sonny Palmer.

Sonny said Georg thought I needed some help, and I’d be checked within a day. I asked if he’d seen the letter with his name on it. He hadn’t. He went and got it.

He came back and said I was going to be operated on in an hour.

He also explained I had a rather blocked artery in need of surgery. Survival rate was very high. Nerve damage from the operation was very unlikely. I did not enjoy phrases like survival and nerve damage, and I realized what might happen very shortly, and what might have happened for the last week.

I went back to the waiting room, where I tweeted what might have been my possible last tweets, left a message for my boss Alexis on the slack channel, hugged Rachel tearfully, and then went into surgery, or potential oblivion.

Obviously, I did not die. The surgery was done with me awake, and involved making a small hole in my right wrist, where Sonny (while blasting Bon Jovi) went in with a catheter, found the blocked artery, installed a 30mm stent, and gave back the blood to the quarter of my heart that was choked off. I listened to instructions on when to talk or when to hold myself still, and I got to watch my beating heart on a very large monitor as it got back its function.

I felt (and feel) legions better, of course – surgery like this rapidly improves life. Fatigue is gone, pain is gone. It was also explained to me what to call this whole event: a major heart attack. I damaged the heart muscle a little, although that bastard was already strong from years of high blood pressure and I’m very young comparatively, so the chances of recovery to the point of maybe even being healthier than before are pretty good. The hospital, St. Vincent’s, was wonderful – staff, environment, and even the food (including curry and afternoon tea) were a delight. My questions were answered, my needs met, and everyone felt like they wanted to be there.

It’s now been 4 days. I was checked out of the hospital yesterday. My stay in Melbourne was extended two weeks, and my hosts (MuseumNext and ACMI) paid for basically all of the additional AirBNB that I’m staying at. I am not cleared to fly until the two weeks is up, and I am now taking six medications. They make my blood thin, lower my blood pressure, cure my kidney stones/gout, and stabilize my heart. I am primarily resting.

I had lost a lot of weight and I was exercising, but my cholesterol was a lot worse than anyone really figured out. The drugs and lifestyle changes will probably help knock that back, and I’m likely to adhere to them, unlike a lot of people, because I’d already been on a whole “life reboot” kick. The path that follows is, in other words, both pretty clear and going to be taken.

Had I died this week, at the age of 46, I would have left behind a very bright, very distinct and rather varied life story. I’ve been a bunch of things, some positive and negative, and projects I’d started would have lived quite neatly beyond my own timeline. I’d have also left some unfinished business here and there, not to mention a lot of sad folks and some extremely quality-variant eulogies. Thanks to a quirk of the Internet Archive, there’s a little statue of me – maybe it would have gotten some floppy disks piled at its feet.

Regardless, I personally would have been fine on the accomplishment/legacy scale, if not on the first-person/relationships/plans scale. That my Wikipedia entry is going to have a different date on it than February 2017 is both a welcome thing and a moment to reflect.

I now face the Other Half, whatever events and accomplishments and conversations I get to engage in from this moment forward, and that could be anything from a day to 100 years.

Whatever and whenever that will be, the tweet I furiously typed out on my cellphone as a desperate last-moment possible-goodbye after nearly a half-century of existence will likely still apply:

“I have had a very fun time. It was enormously enjoyable, I loved it all, and was glad I got to see it.”


Three takeaways to understand Cloudflare's apocalyptic-proportions mess

Published 24 Feb 2017 by Carlos Fenollosa in Carlos Fenollosa — Blog.

It turns out that Cloudflare's proxies have been dumping uninitialized memory that contains plain HTTPS content for an indeterminate amount of time. If you're not familiar with the topic, let me summarize it: this is the worst crypto news in the last 10 years.

As usual, I suggest you read the HN comments to understand the scandalous magnitude of the bug.

If you don't see this as a news-opening piece on TV it only confirms that journalists know nothing about tech.

How bad is it, really? Let's see

I'm finding private messages from major dating sites, full messages from a well-known chat service, online password manager data, frames from adult video sites, hotel bookings. We're talking full HTTPS requests, client IP addresses, full responses, cookies, passwords, keys, data, everything

If the bad guys didn't find the bug before Tavis, you may be in the clear. However, as usual in crypto, you must assume that any data you submitted through a Cloudflare HTTPS proxy has been compromised.

Three takeaways

A first takeaway: crypto may be mathematically perfect, but humans err and implementations are not. Just because something is using strong crypto doesn't mean it's immune to bugs.

A second takeaway: MITMing the entire Internet doesn't sound so compelling when you put it that way. Sorry to be that guy, but this only confirms that the centralization of the Internet by big companies is a bad idea.

A third takeaway: change all your passwords. Yep. It's really that bad. Your passwords and private requests may be stored somewhere, on a proxy or on a malicious actor's servers.

Well, at least change your banking ones, important services like email, and master passwords on password managers -- you're using one, right? RIGHT?

You can't get back any personal info that got leaked but at least you can try to minimize the aftershock.

Update: here is a provisional list of affected services. Download the full list, export your password manager data into a csv file, and compare both files by using grep -f sorted_unique_cf.txt your_passwords.csv.
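As a concrete sketch of that comparison (the filenames come from the post; the file contents below are made-up samples), grep -f treats each line of the first file as a pattern to search for in the second:

```shell
# Made-up sample data standing in for the real downloads/exports:
printf 'example-affected.com\n' > sorted_unique_cf.txt
printf 'site,login,password\nexample-affected.com,me,hunter2\nunaffected.org,me,secret\n' > your_passwords.csv

# Print any rows of your password export that mention a potentially affected domain:
grep -f sorted_unique_cf.txt your_passwords.csv
```

Any line this prints is a credential worth rotating first.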

Afterwards, check the list of potentially affected iOS apps.

Let me conclude by saying that unless you were the victim of a targeted attack it's improbable that this bug is going to affect you at all. However, that small probability is still there. Your private information may be cached somewhere or stored on a hacker's server, waiting to be organized and leaked with a flashy slogan.

I'm really sorry about the overly dramatic post, but this time it's for real.

Tags: security, internet, news

Comments? Tweet  

DigitalOcean, Your Data, and the Cloudflare Vulnerability

Published 23 Feb 2017 by DigitalOcean in DigitalOcean Blog.

Over the course of the last several hours, we have received a number of inquiries about the Cloudflare vulnerability reported on February 23, 2017. Since the information release, we have been told by Cloudflare that none of our customer data has appeared in search caches. The DigitalOcean security team has done its own research into the issue, and we have not found any customer data present in the breach.

Out of an abundance of caution, DigitalOcean's engineering teams have reset all session tokens for our users, which will require that you log in again.

We recommend that you do the following to further protect your account:

Again, we would like to reiterate that there is no evidence that any customer data has been exposed as a result of this vulnerability, but we care about your security. We are therefore taking this precaution as well as continuing to monitor the situation.

Nick Vigier, Director of Security

The localhost page isn’t working on MediaWiki

Published 23 Feb 2017 by hasanghaforian in Newest questions tagged mediawiki - Webmasters Stack Exchange.

I want to use Widget PDF to embed PDF files on my MediaWiki pages. So at first I installed Extension:Widgets on MediaWiki, and it seems it is installed (I can see it in the Installed extensions list in Special:Version of the wiki). Then I copied and pasted the entire source of the PDF widget code page into a page called Widget:PDF on my Wiki:

<big>This widget allows you to '''embed PDF files''' on your wiki page.</big>

Created by [ühler Wilhelm Bühler] and adapted by [ Karsten Hoffmeyer].

== Using this widget ==
For information on how to use this widget, see [ widget description page on].

== Copy to your site ==
To use this widget on your site, just install [ MediaWiki Widgets extension] and copy the [{{fullurl:{{FULLPAGENAME}}|action=edit}} full source code] of this page to your wiki as page '''{{FULLPAGENAME}}'''.
</noinclude><includeonly><object class="pdf-widget" data="<!--{$url|validate:url}-->" type="application/pdf" wmode="transparent" style="z-index: 999; height: 100%; min-height: <!--{$height|escape:'html'|default:680}-->px; width: 100%; max-width: <!--{$width|escape:'html'|default:960}-->px;"><param name="wmode" value="transparent">
<p>Currently your browser does not use a PDF plugin. You may however <a href="<!--{$url|validate:url}-->">download the PDF file</a> instead.</p></object></includeonly>

My PDF file is under this URL:


And its name is File:GraphicsandAnimations-Devoxx2010.pdf. So as described here, I added this code to my Wiki page:


But this error occured:

The localhost page isn’t working
localhost is currently unable to handle this request. 

What I did:

  1. Also I tried this (original example of the Widget PDF)


    But result was the same.

  2. I read Extension talk:Widgets but did not find anything.

  3. I opened Chrome DevTools (Ctrl+Shift+I), but there was no error.

How can I solve the problem?


After some time, I tried to uninstall Widget PDF and Extension:Widgets and reinstall them. So I removed the Extension:Widgets files/folder from $IP/extensions/ and also deleted the Widget:PDF page from the wiki. Then I installed Extension:Widgets again, but now I cannot open the wiki pages at all (I see the above error again), unless I delete require_once "$IP/extensions/Widgets/Widgets.php"; from LocalSettings.php. So I cannot even try to load Extension:Widgets.

Now I see this error in DevTools:

Failed to load resource: the server responded with a status of 500 (Internal Server Error)

Also, after uninstalling Extension:Widgets, I tried Extension:PDFEmbed and unfortunately again saw the above error.

Making it easier to share annotations on the Web

Published 23 Feb 2017 by Timothy Cole in W3C Blog.

The W3C has announced the publication of three new standards aimed to enable an ecosystem of interoperable products that let the world comment on, describe, tag, and link any resource on the Web. Many websites already allow comments, but current annotation systems rely on unique, usually proprietary technologies chosen and provided by publishers. Notes cannot be shared easily across the Web and comments about a Web page can only be saved and viewed via a single website. Readers cannot select their own tools, choose their own service providers or bring their own communities. The adoption of the Web Annotation standards will spell the end of the phrase “Don’t read the comments!”, returning power to the readers to decide where and how they provide and consume such feedback.

What the Web Annotation standards do

The three new standards describe how to precisely identify the target, body and metadata of an annotation of a Web resource. They provide a basic data structure and protocol to ensure interoperability among annotation systems, but they do not dictate how annotation tools and services are realized in terms of user interface features and functionality. Each of the standards serves a specific purpose:

These specifications provide the foundational material for a new generation of annotation tools on the Web while still leaving developers free to address specific use cases with tailored interfaces and services. This will encourage new innovations and the emergence of community-based best practices. For example, the W3C Working Group Note on Embedding Web Annotations in HTML, published concurrently with the three Web Annotation Recommendations, describes and illustrates just a few of the potential approaches for including annotations within HTML documents, serving as a starting point for further discussion, experimentation and development.
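As a concrete illustration, a minimal annotation in the model's JSON-LD serialization looks roughly like this (the IDs, URLs and body text are hypothetical; the @context, type, body and target structure is what the Recommendation standardizes):

```json
{
  "@context": "http://www.w3.org/ns/anno.jsonld",
  "id": "http://example.org/anno1",
  "type": "Annotation",
  "body": {
    "type": "TextualBody",
    "value": "A reader's comment about the quoted passage.",
    "format": "text/plain"
  },
  "target": {
    "source": "http://example.org/page1",
    "selector": {
      "type": "TextQuoteSelector",
      "exact": "annotations on the Web"
    }
  }
}
```

The target's selector anchors the note to an exact quote, which is what lets independent tools re-attach the same annotation to the same passage of the same page.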

Getting This Far and What’s Ahead

The work on the annotation specifications started in 2009 with two independent groups, the Annotation Ontology and the Open Annotation Collaboration (both of which built upon the early W3C project: Annotea). In 2011, the two groups joined forces to help found the W3C Open Annotation Community Group. In 2013 this Community Group published a series of initial draft specifications. 2014 saw the creation of the Web Annotation Working Group to take the work through the standardization process and further the engagement with the web community, resulting in the specifications published on February 23rd, 2017.

As a diverse group of Web developers, publishers, and content creators note below, this work is and will be increasingly important as the volume and speed of information publishing continue to grow. The world has seen a dramatic increase in the spread of misinformation and “fake news”, and the web previously lacked a decentralized, trustworthy mechanism for fact checking and public discussion. Cory Doctorow, of the Electronic Frontier Foundation, describes the importance of annotation in this space:

We are absolutely delighted to see these recommendations land and endorse them in full. Though much hard work remains to be done, a formal standard for a universal web annotation layer is a critical step in the development of this promising new paradigm.

The broad, growing interest in Web annotation tools and services magnifies the likely impact of these specifications. As Dan Whaley, of the Annotating All Knowledge coalition, notes, the publication of these Recommendations means that:

Annotation has now become a formal part of the Web, the importance of which cannot be overstated. Over seventy major publishers and platforms under the Annotating All Knowledge coalition have pledged to include interoperable annotations as a collaborative framework over their content, and these implementations can now move forward with confidence. More importantly, browsers can now consider enabling users to listen for conversations on every page on the Web as a native capability.

Another domain that directly benefits from these standards is the multi-billion-dollar e-book publishing sector. Sharing annotations from your ePub reader — whether on your phone, computer, or dedicated device — and interacting with others regardless of their particular platform, enables massive and rapid improvements in teaching and learning at all levels. Patrick Johnston, Director of Platform Architecture, Product Technology, at the publisher John Wiley & Sons, Inc. describes the importance of the work:

We’ve used the Open Annotation Community Group’s Data Model at Wiley for some time. The Web Annotation specifications provide some needed improvements and additional guidance we’re working to implement and look forward to continued collaboration around annotation in digital publishing.

The traditions of scholarly discourse in sharing comments, annotations, etc., is a significant use case which can now be brought into the digital age of scholarly publishing. The same is true in areas like digital cultural heritage. Sheila Rabun is the Community and Communications Officer for the IIIF Consortium (International Image Interoperability Framework), currently consisting of 40 primarily academic and cultural heritage organizations including the national libraries of Britain, France, Israel, Norway, and Poland, and universities such as Stanford, Harvard, Cornell, Yale, Princeton, MIT, Oxford, Cambridge, and Tokyo. She describes the standards’ importance in that community:

The work done in IIIF could not happen without the groundbreaking specifications coming from the Open Annotation and Web Annotation groups. Annotation is a fundamental part of the IIIF model, and our most asked-for and discussed feature in implementations. It increases the visibility of digital cultural heritage and enables distributed online scholarship.

Acknowledgments and Further Information

We would like to thank everyone who has been involved throughout the process: in particular the previous co-chair, Frederick Hirsch; the W3C staff contacts, Ivan Herman and Doug Schepers; the other editors of the specifications, Benjamin Young and Paolo Ciccarese; the members of the Web Annotation Working Group; and the members of the Open Annotation Community Group. We are grateful for the past, present, and future work underway around these specifications.

For further information please contact the Chairs of the Web Annotation Working Group, Dr. Robert Sanderson (J. Paul Getty Trust) and Prof. Timothy Cole (University of Illinois at Urbana-Champaign). To comment on or discuss potential uses of the Web Annotation Recommendations, or to post news and updates about your implementations of these specifications, please join the W3C Open Annotation Community Group.

MediaWiki Restrict Editing on Pages

Published 23 Feb 2017 by Bob in Newest questions tagged mediawiki - Stack Overflow.

I am currently trying to set up a MediaWiki Site, but I don't want anyone to edit the pages.

I want to set it up so that only certain users can edit certain pages. Does anyone know of any way this can be achieved? I have had a look at the extensions but so far I am drawing a blank.

Anyone know of anything that I can use?

Any help is appreciated.
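For the common case, MediaWiki's built-in group permissions and page-protection levels can get most of the way there without an extension. A sketch for LocalSettings.php follows; the "trusted" group name and "trusted-only" protection level are assumptions, not MediaWiki defaults:

```php
# LocalSettings.php — a sketch using core group permissions and protection levels.
$wgGroupPermissions['*']['edit']       = false;  # anonymous visitors: read only
$wgGroupPermissions['user']['edit']    = false;  # ordinary accounts: read only
$wgGroupPermissions['trusted']['edit'] = true;   # hypothetical "trusted" group may edit

# Add a custom protection level so individual pages can be locked to that group
$wgRestrictionLevels[] = 'trusted-only';
$wgGroupPermissions['trusted']['trusted-only'] = true;
```

Users are then added to the group via Special:UserRights, and individual pages are protected at the new level. For genuinely per-page, per-user access control, an extension such as Lockdown is the usual suggestion, with the caveat that MediaWiki's own documentation warns these restrictions are not watertight.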

"Without the evolution of locomotion there would be no sex, no photosynthesis, no ecology"

Published 23 Feb 2017 by in New Humanist Articles and Posts.

Q&A with evolutionary biologist Matt Wilkinson.

Updates to DigitalOcean Two-factor Authentication

Published 22 Feb 2017 by DigitalOcean in DigitalOcean Blog.

Today we'd like to talk about security.

We know how challenging it can be to balance security and usability. The user experience around security features can often feel like an afterthought, but we believe that shouldn't be the case. Usability is just as important when it comes to security as any other part of your product because added friction can lead users to make less-secure choices. Today, we want to share with you some updates we rolled out this week to our two-factor login features to make them easier to use.

Our previous version required both SMS and an authenticator app to enable two-factor authentication. While SMS can work in a crunch, it's no longer as secure as it once was, delivery for our international customers wasn't always reliable, and tying both methods for authentication to the same mobile device definitely wasn't a great experience for anyone whose phone was unavailable.

Our new two-factor authentication features allow developers to choose between an authenticator app or SMS as a primary method, and between downloadable codes, authenticator app, or SMS as backup methods. This way SMS stays an option, but isn't a necessary part of securing access to your DigitalOcean account.

Add backup methods

To take a look at the changes and enable it on your account, simply navigate to Settings and click the link in your profile to "Enable two-factor authentication."

Enable two-factor authentication

Making two-factor authentication a little easier and more broadly available is just a first step. We believe securing access to your infrastructure should be as simple as it is to spin up a few Droplets and a Load Balancer.

Do you have any suggestions for how we can help make security easier? We want to hear from you. We're already considering features like YubiKey support. What else would you like to see? Please reach out to us on our UserVoice or let us know in the comments below.

Nick Vigier - Director of Security
Josh Viney - Product Manager, Customer Experience

Mediawiki doesn't send any email

Published 22 Feb 2017 by fpiette in Newest questions tagged mediawiki - Stack Overflow.

My Mediawiki installation (1.28.0, PHP 7.0.13) doesn't send any email and yet there is no error reported. I checked using Special:EmailUser page.

What I have tried:

1) Command line SendMail can send email without problem.

2) A simple PHP script to send a mail using PHP's mail() function. It works.

3) I have turned on PHP's mail log. There is a normal log line for each MediaWiki email "sent", but nothing is actually sent.

Additional info:

PHP is configured (correctly since it works) to send email using Linux SendMail. MediaWiki is not configured to use direct SMTP.

Any suggestion appreciated. Thanks.
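For reference, when sendmail hand-off misbehaves like this, one common workaround is to bypass it and point MediaWiki at an SMTP server directly via $wgSMTP in LocalSettings.php. The host and credentials below are placeholders, not a recommendation for this exact setup:

```php
# LocalSettings.php — placeholder values; adjust to your mail provider.
$wgSMTP = array(
    'host'     => 'tls://smtp.example.org',  # hypothetical SMTP server
    'IDHost'   => 'example.org',             # used when building the Message-ID
    'port'     => 587,
    'auth'     => true,
    'username' => 'wiki@example.org',
    'password' => 'secret',
);
```

With $wgSMTP set, MediaWiki's mailer talks to the SMTP server itself instead of relying on the local sendmail pipeline, which also makes failures visible as explicit errors rather than silently-dropped messages.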

MediaWiki on Subdomain (.htaccess rewrite)

Published 21 Feb 2017 by LG-Dev in Newest questions tagged mediawiki - Stack Overflow.

I am using an Apache server (I can't configure the Apache root files) and running my core website (Invision Power) on the root domain "". We decided to expand our services with a wiki using MediaWiki, which is installed and can currently be reached at "".

I am utterly noobish with .htaccess and rewrite conds/rules and am looking for help! We want our wiki to be accessed via - and this URL should NOT change in the address bar. Each page ( should be accessed like this.

Please keep in mind that we want our core website to keep working as it has for years now. So and any other folders should not be affected by the rewrite rule.

Can someone please help - do you need any further information??
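As a starting point only (the question's actual domain and install paths were not included, so every path below is an assumption), a typical MediaWiki short-URL rewrite in .htaccess looks something like this:

```apache
# .htaccess sketch — assumes MediaWiki is installed in /w/ and articles
# should appear at /wiki/Page_Title without the address bar changing.
RewriteEngine On
# Leave the real entry points and the core website's folders untouched
RewriteCond %{REQUEST_URI} !^/w/
# Internally rewrite pretty URLs to index.php (no external redirect, so the
# visible URL stays as typed)
RewriteRule ^wiki/?(.*)$ /w/index.php?title=$1 [L,QSA]
```

Because RewriteRule without the [R] flag performs an internal rewrite, the browser's address bar keeps showing the /wiki/… URL, which matches the requirement above.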


Editing MediaWiki pages in an external editor

Published 21 Feb 2017 by Sam Wilson in Sam's notebook.

I’ve been working on a MediaWiki gadget lately, for editing Wikisource authors’ metadata without leaving the author page. It’s fun working with and learning more about OOjs-UI, but it’s also a pain because gadget code is kept in Javascript pages in the MediaWiki namespace, and so every single time you want to change something it’s a matter of saving the whole page, then clicking ‘edit’ again, and scrolling back down to find the spot you were at. The other end of things—the re-loading of whatever test page is running the gadget—is annoying and slow enough, without having to do much the same thing at the source end too.

So I’ve added a feature to the ExternalArticles extension that allows a whole directory full of text files to be imported at once (namespaces are handled as subdirectories). More importantly, it also ‘watches’ the directories and every time a file is updated (i.e. with Ctrl-S in a text editor or IDE) it is re-imported. So this means I can have MediaWiki:Gadget-Author.js and MediaWiki:Gadget-Author.css open in PhpStorm, and just edit from there. I even have these files open inside a MediaWiki project and so autocompletion and documentation look-up works as usual for all the library code. It’s even quite a speedy set-up, luckily: I haven’t yet noticed having to wait at any time between saving some code, alt-tabbing to the browser, and hitting F5.

I dare say my bodged-together script has many flaws, but it’s working for me for now!

How to find related entries in SMW by value in subobject field

Published 20 Feb 2017 by Velaro in Newest questions tagged mediawiki - Stack Overflow.

The poor documentation for MediaWiki and Semantic MediaWiki drives me crazy. What does the subobject column in smw_object_ids mean? How can I find something related to a record which stores something like _QUERYgjdfghjsag9u05sdfa in the column specified above?


And what data is smw_proptable_hash supposed to hold? If I unserialize it I see:

array (
  'smw_di_number' => '3acec8ed7529527ac33713b1668f31c2',
  'smw_di_blob' => 'c201d67c4b8317d31b05d38d796671d2',
  'smw_di_time' => 'eff3878694d4aee1e88eb979bbd30097',
  'smw_di_wikipage' => 'e474079e8c5fab4ec7197d6aaa884032',
  'smw_fpt_ask' => 'e721ae2cb8f49309e10a27467306644c',
  'smw_fpt_inst' => 'c7af3f2c8f2f5276c1284b3855358979',
  'smw_fpt_sobj' => '7fe51e1a5b9c41d770d3dd8b1e1a16fa',
  'smw_fpt_mdat' => 'a400d86be3f69fbb788c4cfcdddaf077',
  'smw_fpt_cdat' => 'd063996afa76760ea758a1ab13deb191',
)

But I can't find any of these hashes in the specified tables.

When is a ban not a ban?

Published 20 Feb 2017 by in New Humanist Articles and Posts.

Cries of censorship followed reports that London students were demanding white philosophers be dropped from the curriculum. But there was more to the story.

Mediawiki doesn't send any email

Published 19 Feb 2017 by fpiette in Newest questions tagged mediawiki - Ask Ubuntu.

My MediaWiki installation (1.28.0, PHP 7.0.13) doesn't send any email, and yet no error is emitted. I checked using the Special:EmailUser page.

What I have tried: 1) A simple PHP script to send a mail using PHP's mail() function. It works. 2) I have turned on PHP's mail log. There is a normal line for each MediaWiki email "sent".

PHP is configured (correctly since it works) to send email using Linux SendMail. MediaWiki is not configured to use direct SMTP.

Any suggestion appreciated. Thanks.

Issue with Composer install of Semantic MediaWiki

Published 19 Feb 2017 by mathguyjohn in Newest questions tagged mediawiki - Stack Overflow.

I have a new (installed yesterday) installation of MediaWiki and am trying to install the Semantic MediaWiki plugin.

I tried following the instructions at mediawiki, but here's what happened when I tried to install composer-merge-plugin:

$ composer require wikimedia/composer-merge-plugin
Using version ^1.3 for wikimedia/composer-merge-plugin
./composer.json has been updated
> ComposerHookHandler::onPreUpdate
Loading composer repositories with package information
Updating dependencies (including require-dev)
Your requirements could not be resolved to an installable set of packages.

  Problem 1
    - remove mediawiki/core No version set (parsed as 1.0.0)|remove mediawiki/semantic-media-wiki 2.4.6
    - don't install mediawiki/semantic-media-wiki 2.4.6|remove mediawiki/core No version set (parsed as 1.0.0)
    - Installation request for mediawiki/core No version set (parsed as 1.0.0) -> satisfiable by mediawiki/core[No version set (parsed as 1.0.0)].
    - Installation request for mediawiki/semantic-media-wiki (installed at 2.4.6, required as >=2.4) -> satisfiable by mediawiki/semantic-media-wiki[2.4.6].

Installation failed, reverting ./composer.json to its original content.

So instead, I just edited composer.local.json to the following:

{
    "require": {
        "mediawiki/sub-page-list": ">=1.0",
        "mediawiki/semantic-media-wiki": ">=2.4"
    },
    "extra": {
        "merge-plugin": {
            "include": [
            ]
        }
    }
}
and ran composer update. I get a similar error:

$ composer update
> ComposerHookHandler::onPreUpdate
Loading composer repositories with package information
Updating dependencies (including require-dev)
Your requirements could not be resolved to an installable set of packages.

  Problem 1
    - remove mediawiki/core No version set (parsed as 1.0.0)|remove mediawiki/semantic-media-wiki 2.4.6
    - don't install mediawiki/semantic-media-wiki 2.4.0|remove mediawiki/core No version set (parsed as 1.0.0)
    - don't install mediawiki/semantic-media-wiki 2.4.1|remove mediawiki/core No version set (parsed as 1.0.0)
    - don't install mediawiki/semantic-media-wiki 2.4.2|remove mediawiki/core No version set (parsed as 1.0.0)
    - don't install mediawiki/semantic-media-wiki 2.4.3|remove mediawiki/core No version set (parsed as 1.0.0)
    - don't install mediawiki/semantic-media-wiki 2.4.4|remove mediawiki/core No version set (parsed as 1.0.0)
    - don't install mediawiki/semantic-media-wiki 2.4.5|remove mediawiki/core No version set (parsed as 1.0.0)
    - don't install mediawiki/semantic-media-wiki 2.4.6|remove mediawiki/core No version set (parsed as 1.0.0)
    - Installation request for mediawiki/core No version set (parsed as 1.0.0) -> satisfiable by mediawiki/core[No version set (parsed as 1.0.0)].
    - Installation request for mediawiki/semantic-media-wiki >=2.4 -> satisfiable by mediawiki/semantic-media-wiki[2.4.0, 2.4.1, 2.4.2, 2.4.3, 2.4.4, 2.4.5, 2.4.6].

The instructions at semantic mediawiki give a similar error:

$ composer require mediawiki/semantic-media-wiki "~2.4" --update-no-dev
./composer.json has been updated
> ComposerHookHandler::onPreUpdate
Loading composer repositories with package information
Updating dependencies
Your requirements could not be resolved to an installable set of packages.

  Problem 1
    - remove mediawiki/core No version set (parsed as 1.0.0)|remove mediawiki/semantic-media-wiki 2.4.6
    - don't install mediawiki/semantic-media-wiki 2.4.0|remove mediawiki/core No version set (parsed as 1.0.0)
    - don't install mediawiki/semantic-media-wiki 2.4.1|remove mediawiki/core No version set (parsed as 1.0.0)
    - don't install mediawiki/semantic-media-wiki 2.4.2|remove mediawiki/core No version set (parsed as 1.0.0)
    - don't install mediawiki/semantic-media-wiki 2.4.3|remove mediawiki/core No version set (parsed as 1.0.0)
    - don't install mediawiki/semantic-media-wiki 2.4.4|remove mediawiki/core No version set (parsed as 1.0.0)
    - don't install mediawiki/semantic-media-wiki 2.4.5|remove mediawiki/core No version set (parsed as 1.0.0)
    - don't install mediawiki/semantic-media-wiki 2.4.6|remove mediawiki/core No version set (parsed as 1.0.0)
    - Installation request for mediawiki/core No version set (parsed as 1.0.0) -> satisfiable by mediawiki/core[No version set (parsed as 1.0.0)].
    - Installation request for mediawiki/semantic-media-wiki ~2.4 -> satisfiable by mediawiki/semantic-media-wiki[2.4.0, 2.4.1, 2.4.2, 2.4.3, 2.4.4, 2.4.5, 2.4.6].

Installation failed, reverting ./composer.json to its original content.

I haven't done anything to composer.json, but for completeness:

{
        "name": "mediawiki/core",
        "description": "Free software wiki application developed by the Wikimedia Foundation and others",
        "keywords": ["mediawiki", "wiki"],
        "homepage": "",
        "authors": [
                {
                        "name": "MediaWiki Community",
                        "homepage": ""
                }
        ],
        "license": "GPL-2.0+",
        "support": {
                "issues": "",
                "irc": "irc://",
                "wiki": ""
        },
        "require": {
                "composer/semver": "1.4.2",
                "cssjanus/cssjanus": "1.1.2",
                "ext-ctype": "*",
                "ext-iconv": "*",
                "ext-json": "*",
                "ext-mbstring": "*",
                "ext-xml": "*",
                "liuggio/statsd-php-client": "1.0.18",
                "mediawiki/at-ease": "1.1.0",
                "oojs/oojs-ui": "0.17.10",
                "oyejorge/less.php": "",
                "php": ">=5.5.9",
                "psr/log": "1.0.0",
                "wikimedia/assert": "0.2.2",
                "wikimedia/base-convert": "1.0.1",
                "wikimedia/cdb": "1.4.1",
                "wikimedia/cldr-plural-rule-parser": "1.0.0",
                "wikimedia/composer-merge-plugin": "1.3.1",
                "wikimedia/html-formatter": "1.0.1",
                "wikimedia/ip-set": "1.1.0",
                "wikimedia/php-session-serializer": "1.0.4",
                "wikimedia/relpath": "1.0.3",
                "wikimedia/running-stat": "1.1.0",
                "wikimedia/scoped-callback": "1.0.0",
                "wikimedia/utfnormal": "1.1.0",
                "wikimedia/wait-condition-loop": "1.0.1",
                "wikimedia/wrappedstring": "2.2.0",
                "zordius/lightncandy": "0.23"
        },
        "require-dev": {
                "composer/spdx-licenses": "1.1.4",
                "jakub-onderka/php-parallel-lint": "0.9.2",
                "justinrainbow/json-schema": "~3.0",
                "mediawiki/mediawiki-codesniffer": "0.7.2",
                "monolog/monolog": "~1.18.2",
                "nikic/php-parser": "2.1.0",
                "nmred/kafka-php": "0.1.5",
                "phpunit/phpunit": "4.8.24",
                "wikimedia/avro": "1.7.7"
        },
        "suggest": {
                "ext-apc": "Local data and opcode cache",
                "ext-fileinfo": "Improved mime magic detection",
                "ext-intl": "ICU integration",
                "ext-wikidiff2": "Diff accelerator",
                "monolog/monolog": "Flexible debug logging system",
                "nmred/kafka-php": "Send debug log events to kafka",
                "pear/mail": "Mail sending support",
                "pear/mail_mime": "Mail sending support",
                "pear/mail_mime-decode": "Mail sending support",
                "wikimedia/avro": "Binary serialization format used with kafka"
        },
        "autoload": {
                "psr-0": {
                        "ComposerHookHandler": "includes/composer"
                }
        },
        "scripts": {
                "lint": "parallel-lint --exclude vendor",
                "phpcs": "phpcs -p -s",
                "fix": "phpcbf",
                "pre-install-cmd": "ComposerHookHandler::onPreInstall",
                "pre-update-cmd": "ComposerHookHandler::onPreUpdate",
                "test": [
                        "composer lint",
                        "composer phpcs"
                ]
        },
        "config": {
                "optimize-autoloader": true,
                "prepend-autoloader": false
        },
        "extra": {
                "merge-plugin": {
                        "include": [
                        ],
                        "merge-dev": false
                }
        }
}

Also, why does it look like it's trying to remove mediawiki/core?

Mediawiki API: get recent changes with categories of each page, or only those recentchanges entries which are in a certain category

Published 17 Feb 2017 by Velaro in Newest questions tagged mediawiki - Stack Overflow.

I want to retrieve from Mediawiki the list of recent changes in a certain category. I am trying to use the recentchanges API; I would either need to be able to limit the results to that category, or for each recentchanges entry get the list of categories that page is in.
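One workable approach, sketched below, combines the two: use generator=recentchanges so each recently changed page comes back as a query page, and attach prop=categories with clcategories restricted to the wanted category, so membership can be checked client-side. The endpoint and category name are assumptions for illustration:

```python
# Sketch: recent changes filtered to one category via the MediaWiki API.
# With clcategories set, the API reports a page's "categories" list only
# when the page is in that category, so a non-empty list means a match.
API = "https://en.wikipedia.org/w/api.php"  # hypothetical target wiki

def build_params(category, limit=50):
    """Query parameters: recent changes as a generator, plus category info."""
    return {
        "action": "query",
        "format": "json",
        "generator": "recentchanges",
        "grclimit": limit,           # generator parameters take the "grc" prefix
        "prop": "categories",
        "clcategories": category,    # only report membership in this category
    }

def pages_in_category(response):
    """Keep only the generated pages whose 'categories' list is non-empty."""
    pages = response.get("query", {}).get("pages", {})
    return [p["title"] for p in pages.values() if p.get("categories")]
```

A request built from build_params() can be sent with any HTTP client; pages_in_category() then reduces the response to the recently changed pages that are actually in the category.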

How to download the wikipedia articles that are listed in PetScan tool?

Published 17 Feb 2017 by Bhabani Mohapatra in Newest questions tagged mediawiki - Stack Overflow.

I have shortlisted a list of Wikipedia articles using the PetScan tool. Below is the link

I used the "Diseases & disorders" category from Wikipedia with a depth value of 2. Approximately 10,000 articles were listed in the results.

My question is: how do I download the articles to my computer? I am new to these things, so I need help.
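One common route, sketched below, is to feed the PetScan title list to Wikipedia's Special:Export page, which returns the articles as XML in one POST request (titles go one per line in the "pages" form field). The URL is standard; the helper name and its defaults are illustrative:

```python
# Sketch: turn a PetScan title list into a Special:Export request body.
# Special:Export accepts a POST with newline-separated titles and returns
# the articles' wikitext wrapped in an XML dump.
EXPORT_URL = "https://en.wikipedia.org/wiki/Special:Export"

def export_payload(titles, include_history=False):
    """Build the POST form data Special:Export expects."""
    return {
        "pages": "\n".join(titles),                 # one title per line
        "curonly": "" if include_history else "1",  # "1" = latest revision only
    }
```

For ~10,000 articles the list is usually split into batches of a few hundred titles per request; the resulting XML can also be re-imported into a local MediaWiki via Special:Import.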

Week #5: Politics and the Super Bowl – chewing a pill too big to swallow

Published 17 Feb 2017 by legoktm in The Lego Mirror.

For a little change, I'd like to talk about the impact of sports upon us this week. The following opinion piece was first written for La Voz, and can also be read on their website.

Super Bowl commercials have become the latest victim of extreme politicization. Two commercials stood out from the rest by featuring pro-immigrant advertisements in the midst of a political climate deeply divided over immigration law. Specifically, Budweiser aired a mostly fictional story of their founder traveling to America to brew, while 84 Lumber’s ad followed a mother and daughter’s odyssey to America in search of a better life.

The widespread disdain toward non-white outsiders, which in turn has created massive backlash toward these advertisements, is no doubt repulsive, but caution should also be exercised when critiquing the placement of such politicization. Understanding the complexities of political institutions and society is no doubt essential, yet it is alarming that every facet of society has become so politicized; ironically, this desire to achieve an elevated political consciousness actually turns many off from the importance of politics.

Football — what was once simply a calming means of unwinding from the harsh winds of an oppressive world — has now become another headline news center for political drama.

President George H. W. Bush and his wife practically wheeled themselves out of a hospital to prepare for hosting the game. New England Patriots owner, Robert Kraft, and quarterback, Tom Brady, received sharp criticism for their support of Donald Trump, even to the point of losing thousands of dedicated fans.

Meanwhile, the NFL Players Association publicly opposed President Trump’s immigration ban three days before the game, with the NFLPA’s president saying “Our Muslim brothers in this league, we got their backs.”

Let’s not forget the veterans and active service members that are frequently honored before NFL games, except that’s an advertisement too – the Department of Defense paid NFL teams over $5 million over four years for those promotions.

Even though it's America's pastime, football and other similarly mindless outlets play the role of letting us escape whenever we need a break from reality, and for nearly three hours on Sunday, America got its break, except for those commercials. If we keep getting nagged about an issue, even one we generally support, it will eventually become incessant to the point of promoting nihilism.

When Meryl Streep spoke out at the Golden Globes, she turned a relaxing evening of celebrity fawning into a political shitstorm which redirected all attention back toward Trump controversies. Even though she was mostly correct, the efficacy becomes questionable after such repetition, as many will become desensitized.

Politics are undoubtedly more important than ever now, but for our sanity’s sake, let’s keep it to a minimum in football. That means commercials too.

Why are images not shown with https-wiki and Visual Editor?

Published 16 Feb 2017 by waanders in Newest questions tagged mediawiki - Stack Overflow.

We have a problem when using the VisualEditor with an https-protected MediaWiki wiki: images on a wiki page aren't displayed when editing the page, though they do appear in Read mode.

View source of the page contains:

Console of the browser gives this error message:

GET https://localhost/images/5/5a/MyImage.jpg net::ERR_CONNECTION_REFUSED.

How can I fix this?

Regards, Jethro

Page loading as run(); after upgrading ubuntu to 16.04.2

Published 15 Feb 2017 by Shae Tomkiewicz in Newest questions tagged mediawiki - Stack Overflow.

I use MediaWiki to run a small internal wiki on an Ubuntu server. I upgraded Ubuntu to 16.04.2 and now when I try to load my wiki page, it just says run();

I am assuming this is something to do with the apache2, but I will be honest in saying that I am not super familiar with Linux command line. Most of what I have done has been over many hours and lots of google.

Any help in this would be great, I'm hoping it is just something stupid. I checked the LocalSettings.php for my mediawiki and nothing seems to have changed on that end of the deal.

Simplest way to make a copy of images in a folder

Published 15 Feb 2017 by Peter Krauss in Newest questions tagged mediawiki - Stack Overflow.

Is there a "hidden option" or other MediaWiki tool to do a simple image export?

I am currently using this rather complex approach in the terminal,

php maintenance/dumpUploads.php | \
   sed 's~mwstore://local-backend/local-public~./images~' > /tmp/localList.txt
mkdir ~/MyBackupFolder
cp $(cat /tmp/localList.txt) ~/MyBackupFolder

... is there some "XPTO option" to do this directly,

php maintenance/dumpUploads.php --XPTO ~/MyBackupFolder

with no sed...

MediaWiki extension compatibility "upstream sent too big header" error

Published 14 Feb 2017 by user193661 in Newest questions tagged mediawiki - Stack Overflow.

I have MediaWiki 1.28, PHP 7.1.1, php-fpm, nginx 1.10.3, and arch linux.

MediaWiki says it's compatible with PHP 7 but is not fully compatible with PHP 7.1.1. However, I believe the error I'm getting is a PHP 5.3 compatibility issue: I'm getting PHP "expected to be a reference" warnings for my MediaWiki extensions. They were installed by default in the MediaWiki 1.28 installation.

I don't know to what extent the incompatibility affects my installation or what the appropriate solution is.

I was getting an nginx "upstream header too big" error that was causing 502 responses for some pages:

When I'm logged in and try to access certain urls like /MediaWiki:Common.css or Index.php?title=Special:Search&search=Common.css&fulltext=Search&profile=all then I get a 502 response. If I go to these urls while logged out, they return normal 200. I solved it via this post's advice: upstream sent too big header while reading response header from upstream.

Is the PHP compatibility issue the cause of the php-fpm buffer issue? Should I downgrade to php 7 or could I modify the extensions? Can my extensions be updated because this bug fix has been performed already?

This is the nginx block that runs the php files:

location ~ \.php$ {
   include /etc/nginx/fastcgi_params;
   fastcgi_buffers 16 16k; # Added this fix
   fastcgi_buffer_size 32k; # Added this fix
   fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock;
   fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
   fastcgi_param  QUERY_STRING $query_string;
   fastcgi_index /index.php;
}

And some of the journalctl:

Feb 12 18:51:09 hochine nginx[5471]: 2017/02/12 18:51:09 [error] 5474#5474: *115 FastCGI sent in stderr: "PHP message: PHP Warning:  Parameter 1 to CiteHooks::onResourceLoaderRegisterModules() expected to be a reference, value given in /usr/share/webapps/mediawiki/includes/Hooks.php on line 195" while reading response header from upstream, client:, server: mediawiki, request: "GET /load.php?debug=false&lang=en&modules=ext.pygments%7Cmediawiki.legacy.commonPrint%2Cshared%7Cmediawiki.sectionAnchor%7Cmediawiki.skinning.interface%7Cskins.vector.styles&only=styles&skin=vector HTTP/1.1", upstream: "fastcgi://unix:/var/run/php-fpm/php-fpm.sock:", host: "localhost", referrer: "http://localhost/wiki/MediaWiki:Common.css"
Feb 12 18:51:26 hochine nginx[5471]: 2017/02/12 18:51:26 [error] 5474#5474: *134 FastCGI sent in stderr: "PHP message: PHP Warning:  Parameter 1 to Poem::init() expected to be a reference, value given in /usr/share/webapps/mediawiki/includes/Hooks.php on line 195
Feb 12 18:51:26 hochine nginx[5471]: PHP message: PHP Warning:  Parameter 1 to SyntaxHighlight_GeSHi::onParserFirstCallInit() expected to be a reference, value given in /usr/share/webapps/mediawiki/includes/Hooks.php on line 195
Feb 12 18:51:26 hochine nginx[5471]: PHP message: PHP Warning:  Parameter 1 to Cite::clearState() expected to be a reference, value given in /usr/share/webapps/mediawiki/includes/Hooks.php on line 195
Feb 12 18:51:26 hochine nginx[5471]: PHP message: PHP Warning:  Parameter 2 to Cite::checkRefsNoReferences() expected to be a reference, value given in /usr/share/webapps/mediawiki/includes/Hooks.php on line 195
Feb 12 18:51:26 hochine nginx[5471]: PHP message: PHP Warning:  Parameter 1 to Cite::clearState() expected to be a reference, value given in /usr/share/webapps/mediawiki/includes/Hooks.php on line 195
Feb 12 18:51:26 hochine nginx[5471]: PHP message: PHP Warning:  Parameter 2 to Cite::checkRefsNoReferences() expected to be a reference, value given in /usr/share/webapps/mediawiki/includes/Hooks.php on line 195
Feb 12 18:51:26 hochine nginx[5471]: PHP message: PHP Warning:  Parameter 1 to Cite::clearState() expected to be a reference, value given in /usr
Feb 12 18:51:26 hochine nginx[5471]: 2017/02/12 18:51:26 [error] 5474#5474: *134 upstream sent too big header while reading response header from upstream, client:, server: mediawiki, request: "GET /index.php?title=Special:Search&search=Common.css&fulltext=Search&profile=all HTTP/1.1", upstream: "fastcgi://unix:/var/run/php-fpm/php-fpm.sock:", host: "localhost"

Not able to install mediawiki for ubuntu 16.04

Published 14 Feb 2017 by kavita nirmal in Newest questions tagged mediawiki - Stack Overflow.

I used this link for the MediaWiki installation and followed all the steps up to "Configure Apache for Mediawiki".

When I enter this command:

sudo a2ensite mediawiki sudo a2dissite 000-default

I'm getting this error:

Site mediawiki already enabled
ERROR: Site sudo does not exist!
ERROR: Site a2dissite does not exist!
Enabling site 000-default.
To activate the new configuration, you need to run:
  service apache2 reload

What is the problem?


Thanks for your reply. Yes, I tried

  1. sudo a2ensite mediawiki
  2. sudo a2dissite 000-default

After first command I got

Site mediawiki already enabled

And after second command I got

Site 000-default disabled.
To activate the new configuration, you need to run:
service apache2 reload 

So I executed service apache2 reload; the output is

service apache2 reload
apache2.service is not active, cannot reload.

Load Balancers: Simplifying High Availability

Published 13 Feb 2017 by DigitalOcean in DigitalOcean Blog.

Over the past five years, we've seen our community grow by leaps and bounds, and we've grown right alongside it. More and more of our users are managing complex workloads that require more resilience and need to be highly available. Our Floating IPs already enable you to implement an architecture that eliminates single points of failure, but we knew we could do better by bringing our "DO-Simple" approach to the problem.

So today, we are releasing Load Balancers—a fully managed, highly available service that you can deploy as easily as a Droplet.

Our goal is to provide simple and intuitive tools that let your team launch, scale, and manage production applications of any size. With our Load Balancers, just choose a region and which Droplets will receive the traffic. We take care of the rest.

Load Balancers cost $20/month with no additional bandwidth charges and are available in all DigitalOcean regions.


For more details, see this overview on our Community site.

Simplified Service Discovery

Your Load Balancer will distribute incoming traffic across your Droplets, allowing you to build more reliable and performant applications by creating redundancy. You can add target Droplets to a Load Balancer by either choosing specific Droplets, or choosing a tag used by a group of Droplets.

With tags, scaling your application horizontally becomes easy. Launch a new Droplet with the tag applied, and it will be automatically added to your Load Balancer's backend pool, ready to receive traffic. Remove the tag, and the Droplet will be removed from the backend pool.
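For API users, the tag-based pool described above can be sketched as a request payload. The endpoint and field names used here (`/v2/load_balancers`, `tag`, `forwarding_rules`) are assumptions based on DigitalOcean's public v2 API, not details from this announcement:

```python
import json

API_URL = "https://api.digitalocean.com/v2/load_balancers"  # assumed endpoint

def build_lb_request(name: str, region: str, tag: str) -> dict:
    """Build the JSON payload for a tag-backed Load Balancer (hypothetical helper).

    Any Droplet carrying `tag` joins the backend pool automatically, which is
    what makes horizontal scaling a matter of tagging new Droplets.
    """
    return {
        "name": name,
        "region": region,
        "tag": tag,  # tag-based service discovery, instead of fixed droplet IDs
        "forwarding_rules": [{
            "entry_protocol": "http", "entry_port": 80,
            "target_protocol": "http", "target_port": 80,
        }],
    }

payload = build_lb_request("web-lb", "nyc3", "web")
print(json.dumps(payload, indent=2))
```

Removing the tag from a Droplet would likewise drop it from the pool, as described above; no change to the payload is needed.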

Control panel

Get started by following this step-by-step guide on our Community site.

Security & SSL Options

We didn't forget about security! Here's how Load Balancers measure up:

If you're configuring a Load Balancer to use SSL termination, keep in mind that traffic will be sent to the private IP of any connected Droplet that uses Shared Private Networking; otherwise, the Droplet's public IP is used. (For full control and end-to-end encryption, choose the "SSL passthrough" option.)

Learn more about configuring either SSL termination or SSL passthrough with our Community tutorials.

Coming Soon

We already have many Load Balancer improvements planned. Some features you will see soon include:

Load Balancers are just the beginning. Our 2017 roadmap is focused on bringing the "DO-Simple" experience to more complex, production workloads. Your feedback will help us as we improve Load Balancers and roll out more features, including new storage, security, and networking capabilities. Let us know what you think in the comments!

Week #4: 500 for Mr. San Jose Shark

Published 9 Feb 2017 by legoktm in The Lego Mirror.

He did it: Patrick Marleau scored his 500th career goal. He truly is Mr. San Jose Shark.

I had the pleasure of attending the next home game on Saturday right after he reached the milestone in Vancouver, and nearly lost my voice cheering for Marleau. They mentioned his accomplishment once before the game and again during a break, and each time Marleau would only stand up and acknowledge the crowd cheering for him when he realized they would not stop until he did.

He's had his ups and downs, but he's truly a team player.

“I think when you hit a mark like this, you start thinking about everyone that’s helped you along the way,” Marleau said.

And on Saturday at home, Marleau assisted on both Sharks goals, helping out the teammates who had helped him score over the past two weeks.

Congrats Marleau, and thanks for the 20 years of hockey. Can't wait to see you raise the Cup.

W3C expresses concerns on visa suspension that may hurt our worldwide collaboration

Published 7 Feb 2017 by Coralie Mercier in W3C Blog.

W3C statement in the wake of recent changes that may hurt our worldwide collaboration:

Global cooperation is critical to the development of the World Wide Web. Contributors to and implementors of Web standards come together from around the world to specify and build an interoperable platform for information exchange, the Open Web Platform. Our Team and Offices, our Community as well, are distributed. We are stronger for the participation of contributors from diverse backgrounds and nationalities.

We at the World Wide Web Consortium (W3C) have therefore been watching with concern the recent changes to United States travel policy, as have fellow Internet organizations ACM, IETF, ICANN, ISOC, and USENIX. While we do much of our work online, we also rely on face-to-face meetings to help in building consensus and moving work forward.

We ourselves are looking at ways to ensure the inclusivity of all of our meetings including our premier face-to-face event TPAC 2017, scheduled for November in Burlingame, California. While we currently are not in a position to move our TPAC meeting, we continue to look at the situation as it evolves and are looking into how we can enhance remote participation.


Published 7 Feb 2017 by Sam Wilson in Sam's notebook.

I’m heading to MediaWiki with Stevo.

New feature for ia-upload

Published 6 Feb 2017 by Sam Wilson in Sam's notebook.

I have been working on an addition to the IA Upload tool these last few days, and it’s ready for testing. Hopefully we’ll merge it tomorrow or the next day.

This is the first time I’ve done much work with the internal structure of DjVu files, and really it’s all been pretty straight-forward. A couple of odd bits about matching element and page names up between things, but once that was sorted it all seems to be working as it should.

It’s a shame that the Internet Archive has discontinued their production of DjVu files, but I guess they’ve got their reasons, and it’s not like anyone’s ever heard of DjVu anyway. I don’t suppose anyone other than Wikisource was using those files. Thankfully they’re still producing the DjVu XML that we need to make our own DjVus, and it sounds like they’re going to continue doing so (because they use the XML to produce the text versions of items).


Published 2 Feb 2017 by Sam Wilson in Sam's notebook.

Oops! I’ve set a Github/Travis build into an infinite loop. :-(


Published 2 Feb 2017 by Sam Wilson in Sam's notebook.

It sounds like a cool thing built on top of the existing blogosphere, allowing anyone to microblog (i.e. tweet) from the comfort of their own personally-controlled blog installation (e.g. WordPress).

Week #3: All-Stars

Published 2 Feb 2017 by legoktm in The Lego Mirror.

via /u/PAGinger on reddit

Last weekend was the NHL All-Star game and skills competition, with Brent Burns, Martin Jones, and Joe Pavelski representing the San Jose Sharks in Los Angeles. And to no one's surprise, they were all booed!

Pavelski scored a goal during the tournament for the Pacific Division, and Burns scored during the skills competition's "Four Line Challenge". But since they represented the Pacific, we have to talk about the impossible shot Mike Smith made.

And across the country, the 2017 NFL Pro Bowl (their all-star game) was happening at the same time. The Oakland Raiders had seven Pro Bowlers (tied for most from any team), and the San Francisco 49ers had...none.

In the meantime the 49ers managed to hire a former safety with no general manager experience as their new GM. It's really not clear what Jed York, the 49ers owner, is trying to do here, or why he would sign John Lynch to a six-year contract.

But really, how much worse could it get for the 49ers?

Data on the Web? Here’s How

Published 31 Jan 2017 by Phil Archer in W3C Blog.

Este artigo em português

I want a revolution.

Not a political one, and certainly not a violent one, but a revolution nonetheless.

A revolution in the way people think about the way data is shared on the Web, whether openly or not. This is where I typically start talking about people using the Web as a glorified USB stick. That is, using the Web to do no more than transfer data from A to B in a way that could be just as easily achieved by putting it on a USB stick and sending it through the post.

A photograph of a USB stick set on a librarian's index card
Photo credit: Rosie Sutton

The Web is so much more than that. To quote from the Architecture of the World Wide Web, it’s: “… a remarkable information space of interrelated resources, growing across languages, cultures, and media.” It’s the connectivity of ideas and facts between people who are unknown to each other that is so exciting and that has such profound implications.

But how to do it right? As Rebecca Williams of GovEx, formerly of, tweeted recently: “looking at ‘open data portals’ to gather your best practices in metadata and licensing is very backwards, they’re almost all doing it wrong.” I wouldn’t go as far as to say they’re almost all doing it wrong, but it is true that there is a need for a reference for how to do it right.

Which is what today’s Data on the Web Best Practices Recommendation is all about.

It’s taken 4 years from planning the workshop, to setting up the Working Group, to working out what the heck the scope really is and fixing the relationship with the (externally funded) Share-PSI Project, to honing a set of 35 Best Practices that are actionable without being over prescriptive.

The first one is absolute motherhood and apple pie: provide metadata. It sounds silly, and one can argue that if you’re sharing data on the Web and not providing metadata then you’re probably quite keen for no one to find it, let alone use it. Best Practice 9 says “Use Persistent URIs as identifiers of datasets” and BP 10 says “Use persistent URIs as identifiers within datasets.” In my view these two are at the heart of the difference between using the Web as a glorified USB stick and using it as a global information space. The implementation report cites many examples of this, from the Brazilian federal government’s Compras públicas do governo federal to Macedonia’s St Cyril and Methodius University’s Linked Drugs project, from the Auckland War Museum’s API to the UK’s Acropolis project.

Each of the BPs is classified according to one or more benefits

There are Best Practices around areas you’d probably expect, like provenance and licensing, and maybe less obviously things like data enrichment and data archiving. These are topics in their own right of course and the general Data on the Web Best Practices document can only act as a basis. At W3C, further work is currently under way, for example, to standardize ODRL for machine readable permissions and obligations, and the Spatial Data on the Web WG is building directly on DWBP in its own Best Practice document. There’s always more to say – and there are always different ways of working.

Data on the Web Best Practices doesn’t prescribe the use of any particular technology other than Web basics. Each BP has an intended outcome, such as BP14’s “As many users as possible will be able to use the data without first having to transform it into their preferred format.” Or BP23’s “Developers will have programmatic access to the data for use in their own applications, with data updated without requiring effort on the part of consumers. Web applications will be able to obtain specific data by querying a programmatic interface.” But from then on, each BP offers possible approaches to implementation with some examples. If you can achieve the same intended outcome with a different technology, go ahead, you’re still following best practice.

The Working Group as a whole was chartered not just to create a set of Best Practices but to help foster an ecosystem of data sharing. Part of this is addressed in two vocabularies, one for describing the usage of a dataset (through use in an application, citation in someone else’s work etc.) and one for describing quality. Quality is rarely an objective fact but the vocabulary provides a framework in which statements about quality can be made.

DWBP is not just about government data. GS1, the body behind the world's product bar codes, contributed to the work and has already leveraged it in their proposed GS1 SmartSearch. In the world of scientific research, the Pacific Northwest National Laboratory is advocating the work in its publishing of climate simulation datasets on the Earth System Grid Federation and in the Atmosphere to Electrons (A2e) Data Archive and Portal (DAP). Los Alamos and Lawrence Berkeley National Laboratories are also using the document to improve the way data is shared online. Importantly for research data, W3C's Data on the Web Best Practices are fully aligned with the FAIR principles.

It’s always encouraging when you hear other people referring to your work and DWBP got a lot of mentions at the Smart Descriptions and Smarter Vocabularies workshop (SDSVoc) last year (report soon, I promise). And we’ve had compliments from many quarters. I’d like to end by noting two unusual features of the Working Group. First, all of the three active chairs and three of the group’s editors are women. Second this was the first W3C WG that had such strong participation from Brazil.

It’s been a privilege to work with such a terrific group of revolutionaries from all over the world.

Four people standing in front of a roll-up poster saying F2F Sao Paulo
Meet the editors & team contact: (from L to R) Newton Calegari, Caroline Burle, Phil Archer, Bernadette Lóscio

Abandoned code projects

Published 29 Jan 2017 by Sam Wilson in Sam's notebook.

One of the sad things about open source software is the process of working on some code, feeling like it’s going somewhere good and is useful to people, but then at some point having to abandon it. Normally just because life moves on and the higher-priority code always has to be the stuff that earns an income, or just that there are only so many slots for projects in my brain.

I feel this way about Tabulate, the WordPress plugin I was working on until a year ago, and about a few Dokuwiki plugins that I used to maintain. All were good fun to work on, and served reasonably useful places on some people's websites. But I don't have the time, especially as it takes even more time and concentration to switch between completely separate codebases and communities — the latter in particular. So these projects just languish, usually until some wonderful person comes along on Github and asks to take over as maintainer.

I am going to try to keep up with Tabulate, however. It doesn’t need that much work, and the WordPress ecosystem is a world that I actually find quite rewarding to inhabit (I know lots of people wouldn’t agree with that, and certainly there’s a commercial side to it that I find a bit tiring).

Not this morning, though, but maybe later this week… :-)

Updates to

Published 29 Jan 2017 by legoktm in The Lego Mirror.

Over the weekend I migrated and associated services over to a new server. It's powered by Debian Jessie instead of the slowly aging Ubuntu Trusty. Most services were migrated with no downtime by rsync'ing content over and then updating DNS; there was only some downtime where a service needed to be stopped before copying over its database.

I did not migrate my IRC bouncer history or configuration, so I'm starting fresh. So if I'm no longer in a channel, feel free to PM me and I'll rejoin!

At the same time I moved the main homepage to MediaWiki. Hopefully that will encourage me to update the content on it more often.

Finally, the tor relay node I'm running was moved to a separate server entirely. I plan on increasing the resources allocated to it.

Wikisource hangout notes

Published 29 Jan 2017 by Sam Wilson in Sam's notebook.

The notes from the Wikisource hangout last night are now on Meta.


Published 26 Jan 2017 by legoktm in The Lego Mirror.

The only person who would dare upstage Patrick Marleau's four goal night is Randy Hahn, with his hilarious call after Marleau's third goal to finish a natural hat-trick: "NATTY HATTY FOR PATTY". And after scoring another, Marleau became the first player to score four goals in a single period since the great Mario Lemieux did in 1997. He's also the third Shark to score four goals in a game, joining Owen Nolan (no video available, but his hat-trick from the 1997 All-Star game is fabulous) and Tomáš Hertl.

Marleau is also ready to hit his next milestone of 500 career goals - he's at 498 right now. Every impressive stat he puts up just further solidifies him as one of the greatest hockey players of his generation. But he's still missing the one achievement that all the greats need - a Stanley Cup. The Sharks made their first trip to the Stanley Cup Finals last year, but realistically had very little chance of winning; they simply were not the better team.

The main question these days is how long Marleau and Joe Thornton will keep playing for, and if they can stay healthy until they eventually win that Stanley Cup.

Discuss this post on Reddit.

Bromptons in Museums and Art Galleries

Published 23 Jan 2017 by Andy Mabbett in Andy Mabbett, aka pigsonthewing.

Every time I visit London, with my Brompton bicycle of course, I try to find time to take in a museum or art gallery. Some are very accommodating and will cheerfully look after a folded Brompton in a cloakroom (e.g. Tate Modern, Science Museum) or, more informally, in an office or behind the security desk (Bank of England Museum, Petrie Museum, Geffrye Museum; thanks folks).

Brompton bicycle folded

When folded, Brompton bikes take up very little space

Others, without a cloakroom, have lockers for bags and coats, but these are too small for a Brompton (e.g. Imperial War Museum, Museum of London) or they simply refuse to accept one (V&A, British Museum).

A Brompton bike is not something you want to chain up in the street, and carrying a hefty bike-lock would defeat the purpose of the bike’s portability.

Jack Wills, New Street (geograph 4944811)

This Brompton bike hire unit, in Birmingham, can store ten folded bikes each side. The design could be repurposed for use at venues like museums or galleries.

I have an idea. Brompton could work with museums — in London, where Brompton bikes are ubiquitous, and elsewhere, though my Brompton and I have never been turned away from a museum outside London — to install lockers which can take a folded Brompton. These could be inside with the bag lockers (preferred) or outside, using the same units as their bike hire scheme (pictured above).

Where has your Brompton had a good, or bad, reception?


Less than two hours after I posted this, Will Butler-Adams, MD of Brompton, replied to me on Twitter:

so now I’m reaching out to museums, in London to start with, to see who’s interested.

The post Bromptons in Museums and Art Galleries appeared first on Andy Mabbett, aka pigsonthewing.

Running with the Masai

Published 23 Jan 2017 by Tom Wilson in tom m wilson.

What are you going to do if you like tribal living and you’re in the cold winter of the Levant?  Head south to the Southern Hemisphere, and to the wilds of Africa. After leaving Israel and Jordan that is exactly what I did. I arrived in Nairobi and the first thing which struck me was […]

Wikisource Hangout

Published 23 Jan 2017 by Sam Wilson in Sam's notebook.

I wonder how long it takes after someone first starts editing a Wikimedia project that they figure out that they can read lots of Wikimedia news on — and when, after that, they realise they can also post to the news there? (At which point they probably give up if they haven’t already got a blog.)

Anyway, I forgot that I can post news, but then I remembered. So:

There’s going to be a Wikisource meeting next weekend (28 January, on Google Hangouts), if you’re interested in joining:

Week #1: Who to root for this weekend

Published 22 Jan 2017 by legoktm in The Lego Mirror.

For the next 10 weeks I'll be posting sports content related to Bay Area teams. I'm currently taking an intro to features writing class, and we're required to keep a blog that focuses on a specific topic. I enjoy sports a lot, so I'll be covering Bay Area sports teams (Sharks, Earthquakes, Raiders, 49ers, Warriors, etc.). I'll also be trialing using Reddit for comments. If it works well, I'll continue using it for the rest of my blog as well. And with that, here goes:

This week the Green Bay Packers will be facing the Atlanta Falcons in the very last NFL game at the Georgia Dome for the NFC Championship. A few hours later, the Pittsburgh Steelers will meet the New England Patriots in Foxboro competing for the AFC Championship - and this will be only the third playoff game in NFL history featuring two quarterbacks with multiple Super Bowl victories.

Neither Bay Area football team has a direct stake in this game, but Raiders and 49ers fans have a lot to root for this weekend.

49ers: If you're a 49ers fan, you want to root for the Falcons to lose. This might sound a little weird, but the 49ers are currently looking to hire the Falcons' offensive coordinator, Kyle Shanahan, as their new head coach. However, until the Falcons' season ends, they cannot officially hire him. And since the 49ers' general manager search depends upon having a head coach in place, they can get a two-week head start if the Falcons lose this weekend.

Raiders: Do you remember the Tuck Rule Game? If so, you'll still probably be rooting for anyone but Tom Brady, quarterback for the Patriots. If not, well, you'll probably want to root for the Steelers, who eliminated the Raiders' division rival Kansas City Chiefs last weekend in one of the most bizarre playoff games. Even though the Steelers could not score a single touchdown, they topped the Chiefs' two touchdowns with a record six field goals. Raiders fans who had to endure two losses to the Chiefs this season surely appreciated how the Steelers embarrassed the Chiefs on prime-time television.

Discuss this post on Reddit.

Four Stars of Open Standards

Published 21 Jan 2017 by Andy Mabbett in Andy Mabbett, aka pigsonthewing.

I’m writing this at UKGovCamp, a wonderful unconference. This post constitutes notes, which I will flesh out and polish later.

I’m in a session on open standards in government, convened by my good friend Terence Eden, who is the Open Standards Lead at Government Digital Service, part of the United Kingdom government’s Cabinet Office.

Inspired by Tim Berners-Lee’s “Five Stars of Open Data“, I’ve drafted “Four Stars of Open Standards”.

These are:

  1. Publish your content consistently
  2. Publish your content using a shared standard
  3. Publish your content using an open standard
  4. Publish your content using the best open standard

Bonus points for:

Point one, if you like, is about having your own local standard — if you publish three related data sets, for instance, be consistent between them.

Point two could simply mean agreeing a common standard with other teams in your organisation, neighbouring local authorities, or suchlike.

In points three and four, I’ve taken “open” to be the term used in the “Open Definition“:

Open means anyone can freely access, use, modify, and share for any purpose (subject, at most, to requirements that preserve provenance and openness).

Further reading:

The post Four Stars of Open Standards appeared first on Andy Mabbett, aka pigsonthewing.

2017: What's Shipping Next on DigitalOcean

Published 17 Jan 2017 by DigitalOcean in DigitalOcean Blog.

The start of a new year is a great opportunity to reflect on the past twelve months. At the beginning of 2016, I began advising the team at DigitalOcean and I knew the company and the products were something special. I joined DigitalOcean as the CTO in June 2016 and our engineering team was scaling rapidly, teams were organizing around new product initiatives, and we were gearing up for the second product to be shipped in our company's history: Block Storage.

Going from one great product to two in 2016 was a major shift for DigitalOcean and the start of what's going to be an exciting year of new capabilities to support larger production workloads in 2017.

2016 achievements

The "DO-Simple" Way

In the coming year, we are not only strengthening the foundation of our platform to increase performance and enable our customers to scale, we are also broadening our product portfolio to offer services we know teams of developers need. However, we are not just bringing new products and features to market; we are ensuring that what we offer maintains the "DO-Simple" standard that our customers expect and appreciate.

What does DO-Simple mean? At DigitalOcean, we are committed to sticking to our mission to simplify infrastructure and create an experience that developers love. We are challenging the status quo and disrupting the way developers think about using the cloud. This is an exciting chapter for our company and something we believe sets us apart in the market. We want developers to focus on building their applications, not waste time and money on setting up, configuring, and monitoring. Writing great software is hard. The cloud that software runs on should be easy.

2017 Product Horizon

With distributed systems spread over thousands of servers in 12 datacenters across the world, we have valuable operational knowledge on managing infrastructure at scale. We believe our users can leverage the work we do in-house to manage their own infrastructure. Just this month, we released an open source agent that lets developers get a better picture of the health of their Droplets. We also added several new graphs to the Droplet graphs page and made the existing graphs much more precise. Having visibility into your infrastructure is only the first step, knowing when to act on that information is just as important. That's why later this quarter, we will be releasing additional monitoring capabilities and tools to better manage your Droplets in the DO-Simple way you expect. (Learn more about Monitoring on DigitalOcean.)

As we approach one million registered users and more than 40,000 teams of developers after five years, it is critical that we give our users the tools, scale, and performance required to seamlessly launch, scale, and manage a production application of any size. We have more and more customers managing complex workloads and large environments on DigitalOcean that would benefit from a Load Balancer. You can now request early access to Load Balancers on DigitalOcean here.

We aren't stopping at just adding load balancing to our offerings in 2017. We have a number of important capabilities we're working on to meet your high availability, data storage, security, and networking needs. Additionally, we will continue to iterate and invest in our Block Storage offering by making it available in more datacenter locations around the world.

Feedback Matters

We believe in building a customer-first organization that is committed to transparency. Therefore, I will continue to share more updates to our roadmap throughout the year. We have an iterative product development approach and engage our customers in many ways as part of the product prioritization and design process. The developer's voice matters at DigitalOcean. We don't assume that we have all the answers. Talking with and listening to the people who use our cloud day in and day out plays a major role in creating the simple and intuitive developer experience we strive to maintain. In the months to come, we will be engaging our customers through each product beta and general release.

Excited about what's coming? Have ideas about what we should do next? Share your thoughts with us in the comments below.

Happy coding,

Julia Austin, CTO

Supporting Software Freedom Conservancy

Published 17 Jan 2017 by legoktm in The Lego Mirror.

Software Freedom Conservancy is a pretty awesome non-profit that does some great stuff. They currently have a fundraising match going on, that was recently extended for another week. If you're able to, I think it's worthwhile to support their organization and mission. I just renewed my membership.

Become a Conservancy Supporter!

A Doodle in the Park

Published 16 Jan 2017 by Dave Robertson in Dave Robertson.

The awesome Carolyn White is doing a doodle a day, but in this case it was a doodle of Dave, with Tore and The Professor, out in the summer sun of the Manning Park Farmers and Artisan Market.


MediaWiki - powered by Debian

Published 16 Jan 2017 by legoktm in The Lego Mirror.

Barring any bugs, the last set of changes to the MediaWiki Debian package for the stretch release landed earlier this month. There are some documentation changes, and updates for changes to other, related packages. One of the other changes is the addition of a "powered by Debian" footer icon (drawn by the amazing Isarra), right next to the default "powered by MediaWiki" one.

Powered by Debian

This will only be added by default to new installs of the MediaWiki package. But existing users can just copy the following code snippet into their LocalSettings.php file (adjust paths as necessary):

# Add a "powered by Debian" footer icon
$wgFooterIcons['poweredby']['debian'] = [
    "src" => "/mediawiki/resources/assets/debian/poweredby_debian_1x.png",
    "url" => "",
    "alt" => "Powered by Debian",
    "srcset" =>
        "/mediawiki/resources/assets/debian/poweredby_debian_1_5x.png 1.5x, " .
        "/mediawiki/resources/assets/debian/poweredby_debian_2x.png 2x",

The image files are included in the package itself, or you can grab them from the Git repository. The source SVG is available from Wikimedia Commons.


Published 11 Jan 2017 by fabpot in Tags from Twig.


Published 11 Jan 2017 by fabpot in Tags from Twig.

Importing pages breaks category feature

Published 10 Jan 2017 by Paul in Newest questions tagged mediawiki - Webmasters Stack Exchange.

I just installed MediaWiki 1.27.1, and setup completed without issue on a server with Ubuntu 16.04, nginx, PHP 5.6, and MariaDB 10.1.

I created an export file from a different wiki using the Special:Export page, then imported the articles into the new wiki using the Special:Import page. The file size is smaller than any limits, and the time the operation takes to complete is much less than the configured timeouts.

Before import, I have created articles and categories and everything works as expected.

However, after importing, when I create a category tag on an article, clicking the link to the category's page doesn't show the article in the category.

I am using this markup within the article to create the category:

[[Category:Category Name]]

Is this a bug or am I missing something?

Roundcube Webmail 1.3-beta out now

Published 5 Jan 2017 by Roundcube Webmail Dev Team in Roundcube Webmail Project News.

We’re proud to announce the beta release of the next major version 1.3 of Roundcube webmail. With this milestone we introduce some new features:

Plus security and deployment improvements:

And finally some code-cleanup:

IMPORTANT: The code-cleanup part brings major changes and possibly incompatibilities to your existing Roundcube installations. So please read the changelog carefully and thoroughly test your upgrade scenario.

Please note that Roundcube 1.3

  1. no longer runs on PHP 5.3
  2. no longer supports IE < 10 and old versions of Firefox, Chrome and Safari
  3. requires an SMTP server connection to send mails

That last item means you need to review your SMTP server settings as described in our wiki if you have set the smtp_server option to an empty value and are thus using PHP’s mail() function.

In case you’re running Roundcube directly from source, you now need to install the removed 3rd party javascript modules by executing the following install script:

$ bin/

See the complete Changelog and download the new packages from

Please note that this is a beta release and we recommend testing it in a separate environment. And don't forget to back up your data before installing it.


Published 5 Jan 2017 by fabpot in Tags from Twig.

Big Tribes

Published 5 Jan 2017 by Tom Wilson in tom m wilson.

In Jerusalem yesterday I encountered three of the most sacred sites of some of the biggest religions on earth. First the Western Wall, the most sacred site for Jews worldwide. Then after some serious security checks and long wait in a line we were allowed up a long wooden walkway, up to the Temple Mount.   […]

A Year Without a Byte

Published 4 Jan 2017 by Archie Russell in

One of the largest cost drivers in running a service like Flickr is storage. We’ve described multiple techniques to get this cost down over the years: use of COS, creating sizes dynamically on GPUs and perceptual compression. These projects have been very successful, but our storage cost is still significant.
At the beginning of 2016, we challenged ourselves to go further — to go a full year without needing new storage hardware. Using multiple techniques, we got there.

The Cost Story

A little back-of-the-envelope math shows storage costs are a real concern. On a very high-traffic day, Flickr users upload as many as twenty-five million photos. These photos require an average of 3.25 megabytes of storage each, totalling over 80 terabytes of data. Stored naively in a cloud service similar to S3, this day’s worth of data would cost over $30,000 per year, and continue to incur costs every year.

And a very large service will have over two hundred million active users. At a thousand images each, storage in a service similar to S3 would cost over $250 million per year (or $1.25 / user-year) plus network and other expenses. This compounds as new users sign up and existing users continue to take photos at an accelerating rate. Thankfully, our costs, and every large service’s costs, are different than storing naively at S3, but remain significant.
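The back-of-the-envelope figures above are easy to reproduce. This sketch uses an illustrative flat rate modelled on ~2016 S3-style pricing; the rate and photo size are assumptions from the post, not Flickr's actual costs:

```python
# Naive yearly cost of storing photos at a flat per-GB-month rate.
def yearly_storage_cost(num_photos, mb_per_photo=3.25, usd_per_gb_month=0.03):
    total_gb = num_photos * mb_per_photo / 1000
    return total_gb * usd_per_gb_month * 12

# One high-traffic day: 25 million uploads -> just over 80 TB, ~$29k/year.
one_day = yearly_storage_cost(25_000_000)

# 200 million users at 1,000 images each -> roughly $250M/year.
whole_service = yearly_storage_cost(200_000_000 * 1000)
```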

Cost per byte has decreased, but bytes per image from iPhone-type platforms have increased. Cost per image hasn’t changed significantly.

Storage costs do drop over time. For example, S3 costs dropped from $0.15 per gigabyte month in 2009 to $0.03 per gigabyte-month in 2014, and cloud storage vendors have added low-cost options for data that is infrequently accessed. NAS vendors have also delivered large price reductions.

Unfortunately, these lower costs per byte are counteracted by other forces. On iPhones, increasing camera resolution, burst mode and the addition of short animations (Live Photos) have increased bytes-per-image rapidly enough to keep storage cost per image roughly constant. And iPhone images are far from the largest.

In response to these costs, photo storage services have pursued a variety of product options. To name a few: storing lower quality images or re-compressing, charging users for their data usage, incorporating advertising, selling associated products such as prints, and tying storage to purchases of handsets.

There are also a number of engineering approaches to controlling storage costs. We sketched out a few and cover three that we implemented below: adjusting thresholds on our storage systems, rolling out existing savings approaches to more images, and deploying lossless JPG compression.

Adjusting Storage Thresholds

As we dug into the problem, we looked at our storage systems in detail. We discovered that our settings were based on assumptions about high write and delete loads that didn’t hold. Our storage is pretty static. Users only rarely delete or change images once uploaded. We also had two distinct areas of just-in-case space: 5% of our storage was reserved space for snapshots, useful for undoing accidental deletes or writes, and 8.5% was held free in reserve. This resulted in about 13% of our storage going unused. Trade lore states that disks should remain 10% free to avoid performance degradation, but we found 5% to be sufficient for our workload. So we combined our two just-in-case areas into one and reduced our free space threshold to that level. This was our simplest approach to the problem (by far), but it resulted in a large gain. With a couple of simple configuration changes, we freed up more than 8% of our storage.
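The headroom arithmetic above can be sketched directly from the quoted percentages (a toy calculation, not Flickr's actual configuration):

```python
# Toy version of the just-in-case space described above (percentages
# taken from the post; not Flickr's actual settings).
snapshot_reserve = 0.05   # space reserved for snapshots
free_reserve = 0.085      # space held free in reserve
old_unused = snapshot_reserve + free_reserve   # about 13% going unused

new_reserve = 0.05        # the two areas combined into one 5% reserve
freed = old_unused - new_reserve               # more than 8% reclaimed
```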

Adjusting storage thresholds

Extending Existing Approaches

In our earlier posts, we have described dynamic generation of thumbnail sizes and perceptual compression. Combining the two approaches decreased thumbnail storage requirements by 65%, though we hadn’t applied these techniques to many of our images uploaded prior to 2014. One big reason for this: large-scale changes to older files are inherently risky, and require significant time and engineering work to do safely.

Because we were concerned that further rollout of dynamic thumbnail generation would place a heavy load on our resizing infrastructure, we targeted only thumbnails from less-popular images for deletes. Using this approach, we were able to handle our complete resize load with just four GPUs. The process put a heavy load on our storage systems; to minimize the impact we randomized our operations across volumes. The entire process took about four months, resulting in even more significant gains than our storage threshold adjustments.

Decreasing the number of thumbnail sizes

Lossless JPG Compression

Flickr has had a long-standing commitment to keeping uploaded images byte-for-byte intact. This has placed a floor on how much storage reduction we can do, but there are tools that can losslessly compress JPG images. Two well-known options are PackJPG and Lepton, from Dropbox. These tools work by decoding the JPG, then very carefully compressing it using a more efficient approach. This typically shrinks a JPG by about 22%. At Flickr’s scale, this is significant. The downside is that these re-compressors use a lot of CPU. PackJPG compresses at about 2MB/s on a single core, or about fifteen core-years for a single petabyte worth of JPGs. Lepton uses multiple cores and, at 15MB/s, is much faster than PackJPG, but uses roughly the same amount of CPU time.
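The "fifteen core-years per petabyte" figure can be sanity-checked with simple arithmetic, using the throughput numbers quoted above:

```python
# CPU-time estimate for recompressing a petabyte of JPGs.
PETABYTE = 1000 ** 5
MEGABYTE = 1000 ** 2
SECONDS_PER_YEAR = 365 * 24 * 3600

def core_years(total_bytes, mb_per_sec):
    """CPU time, in core-years, to process total_bytes at a given rate."""
    return total_bytes / (mb_per_sec * MEGABYTE) / SECONDS_PER_YEAR

packjpg_cost = core_years(PETABYTE, 2)   # ~15.9 core-years at 2 MB/s
# Lepton's 15 MB/s is multi-core throughput: wall-clock time drops,
# but several cores run at once, so total CPU time stays comparable.
```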

This CPU requirement also complicated on-demand serving. If we recompressed all the images on Flickr, we would need potentially thousands of cores to handle our decompress load. We considered putting some restrictions on access to compressed images, such as requiring users to login to access original images, but ultimately found that if we targeted only rarely accessed private images, decompressions would occur only infrequently. Additionally, restricting the maximum size of images we compressed limited our CPU time per decompress. We rolled this out as a component of our existing serving stack without requiring any additional CPUs, and with only minor impact to user experience.

Running our users’ original photos through lossless compression was probably our highest-risk approach. We can recreate thumbnails easily, but a corrupted source image cannot be recovered. Key to our approach was a re-compress-decompress-verify strategy: every recompressed image was decompressed and compared to its source before removing the uncompressed source image.
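A minimal sketch of that recompress-decompress-verify loop, using zlib as a stand-in for a real lossless JPG recompressor such as PackJPG or Lepton (the helper names and data are illustrative, not Flickr's code):

```python
import zlib

def recompress_safely(original, compress, decompress):
    """Recompress an image, keeping the result only if a full round trip
    restores the source byte-for-byte. compress/decompress stand in for a
    lossless recompressor like PackJPG or Lepton."""
    recompressed = compress(original)
    restored = decompress(recompressed)
    if restored != original:
        # Never remove the uncompressed source unless verification passes.
        raise ValueError("round-trip verification failed; keeping original")
    return recompressed  # now safe to delete the uncompressed source

# zlib is lossless, so the round trip always verifies in this toy example.
blob = b"\xff\xd8\xff\xe0" + b"JPEG-ish payload " * 100
saved = recompress_safely(blob, zlib.compress, zlib.decompress)
```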

This is still a work-in-progress. We have compressed many images but to do our entire corpus is a lengthy process, and we had reached our zero-new-storage-gear goal by mid-year.

On The Drawing Board

We have several other ideas which we’ve investigated but haven’t implemented yet.

In our current storage model, we have originals and thumbnails available for every image, each stored in two datacenters. This model assumes that the images need to be viewable relatively quickly at any point in time. But private images belonging to accounts that have been inactive for more than a few months are unlikely to be accessed. We could “freeze” these images, dropping their thumbnails and recreating them when the dormant user returns. This “thaw” process would take under thirty seconds for a typical account. Additionally, for photos that are private (but not dormant), we could go to a single uncompressed copy of each thumbnail, storing a compressed copy in a second datacenter that would be decompressed as needed.

We might not even need two copies of each dormant original image available on disk. We’ve pencilled out a model where we place one copy on a slower, but underutilized, tape-based system while leaving the other on disk. This would decrease availability during an outage, but as these images belong to dormant users, the effect would be minimal and users would still see their thumbnails. The delicate piece here is the placement of data, as seeks on tape systems are prohibitively slow. Depending on the details of what constitutes a “dormant” photo these techniques could comfortably reduce storage used by over 25%.

We’ve also looked into de-duplication, but we found our duplicate rate is in the 3% range. Users do have many duplicates of their own images on their devices, but these are excluded by our upload tools. We’ve also looked into using alternate image formats for our thumbnail storage. WebP can be much more compact than ordinary JPG, but our use of perceptual compression gets us close to WebP byte size and permits much faster resizing. The BPG project proposes a dramatically smaller, H.265-based encoding, but it has IP and other issues.

There are several similar optimizations available for videos. Although Flickr is primarily image-focused, videos are typically much larger than images and consume considerably more storage.


Optimization over several releases

Since 2013 we’ve optimized our usage of storage by nearly 50%. Our latest efforts helped us get through 2016 without purchasing any additional storage, and we still have a few more options available.

Peter Norby, Teja Komma, Shijo Joy and Bei Wu formed the core team for our zero-storage-budget project. Many others assisted the effort.

Improved Graphs: Powered by the Open Source DO Agent

Published 3 Jan 2017 by DigitalOcean in DigitalOcean Blog.

At DigitalOcean, we want to make monitoring the services you've deployed simple and easy. As engineers, we know that having greater insight into the machines running in your fleet increases the speed at which you can troubleshoot issues.

That's why we're excited to launch new and improved memory and disk space graphs! We've gathered the knowledge that we've learned involving telemetry and performance observability and poured it into an open-source project called do-agent. This monitoring application helps you get a better picture of the health of your Droplets by adding several new graphs to the Droplet graphs page and making the existing graphs much more precise.

New graphs

To get these graphs, you'll need to have the new agent. On new Droplets, just click the Monitoring checkbox during Droplet creation.

Select monitoring

On existing Droplets, you can install the agent by running:

curl -sSL | sh

Or get all the details in this tutorial on the DigitalOcean community site.

How Does do-agent Work?

do-agent is a lightweight application which runs on Droplets and periodically collects system performance/state metrics. The collected metrics are immediately transmitted to the monitoring API endpoints and made available to you via the Droplet graphs page.

When we began thinking of do-agent, security was one of our top priorities; we wanted to take great care not to collect any data that may be considered private. How could we collect the metrics we felt were necessary with an agent that would require the minimum amount of resources and security privileges?

We chose to collect system information from the /proc pseudo filesystem, which contains everything from CPU metrics to Linux kernel versions. In true Unix fashion, /proc presents system information laid out as files on the filesystem; the hierarchy determines the information you are attempting to access. The greatest benefit we gain from using /proc is the ability to access this information as a very low-privileged user.
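To illustrate how little privilege this requires, here is a small parser for the /proc/meminfo key/value format. This is a hedged sketch for illustration, not do-agent's actual implementation:

```python
def parse_meminfo(text):
    """Parse /proc/meminfo's 'Key:    value kB' lines into a dict of ints.
    Illustrative only; not do-agent's real code."""
    metrics = {}
    for line in text.splitlines():
        key, _, rest = line.partition(":")
        if rest.strip():
            metrics[key.strip()] = int(rest.split()[0])  # drop the 'kB' unit
    return metrics

# On a Linux host this needs no special privileges:
#     with open("/proc/meminfo") as f:
#         mem = parse_meminfo(f.read())
sample = "MemTotal:        2048276 kB\nMemFree:          606328 kB"
mem = parse_meminfo(sample)
```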

The /proc files are read and converted into metrics that are transmitted via gRPC to a metrics endpoint. The agent authenticates as belonging to your Droplet and tags all of your data with the Droplet ID.

What's Next?

This new agent opens up many possibilities for future tools that will provide insight into Droplet performance. We're not stopping here! Currently, we're working on a suite of tools which will enable engineers to collectively monitor groups of Droplets instead of individual Droplets.

do-agent also has a plugin architecture built in. We don't have any plugins written yet, but this architecture enables us to observe more than just Droplet metrics; you could potentially collect performance metrics from other software running on or alongside your own.

The Prometheus project was a great inspiration and model for this project (and is used in the agent itself), and the ability for you to install plugins to collect arbitrary metrics was inspired by the Munin open-source project. do-agent is itself open source, and we welcome contributions!

We're excited about the possibilities these graphs and this agent open up for us. If you are too, sign up to be the first to know as we begin to roll out new monitoring and alerting features early this year.

Impressions of Jerusalem and Tel Aviv

Published 3 Jan 2017 by Tom Wilson in tom m wilson.

Arriving in Israel… Coming over the border from Jordan it was forbidding and stern – as though I was passing through a highly militarised zone, which indeed I was. Machine gun towers, arid, blasted dune landscape, and endless security checks and waiting about. Then I was in the West Bank. The first thing I noticed […]


Published 29 Dec 2016 by Tom Wilson in tom m wilson.

I have been travelling West from Asia.  When I was in Colombo I photographed a golden statue of the Buddha facing the Greco-Roman heritage embodied in Colombo’s Town Hall.  And now I’ve finally reached a real example of the Roman Empire’s built heritage – the city of Jerash in Jordan.  Jerash is one of the […]

We Are Bedu

Published 26 Dec 2016 by Tom Wilson in tom m wilson.

While in Wadi Musa I had met our Bedu guide’s 92 year old mother. She was living in an apartment in the town. I asked her if she preferred life when she was a young woman and there was less access to Western conveniences, or if she preferred life in the town today. She told me […]

Montreal Castle

Published 26 Dec 2016 by Tom Wilson in tom m wilson.

I’ve been at Montreal (known in Arabic as Shawbak) Castle, a crusader castle south of Wadi Musa. Standing behind the battlements I had looked through a slit in the stone. Some of this stone had been built by Christians from Western Europe around 1115 AD in order to take back the Holy Land from Muslims. Through […]


Published 26 Dec 2016 by Tom Wilson in tom m wilson.

Mountains entered. Size incalculable. Mystical weight and folds of stone. Still blue air. The first day in Petra we headed out to Little Petra, a few kms away from the more famous site, where a narrow canyon is filled with Nabatean caves, carved around 2000 years ago. On the way we took a dirt track […]

Ghost of blog posts past

Published 25 Dec 2016 by Bron Gondwana in FastMail Blog.

Last year I posted about potential future features in FastMail, and the magic outbox handling support that I had just added to Cyrus. In the spirit of copying last year, I'm doing a Dec 25th post again (with a bit more planning).

During this year's advent I've had more support than in previous years, which is great! I didn't have to write as much. One day we might run out of things to say, but today is not that day.

Last year's post definitely shows the risks of making future predictions out loud, because for various reasons we spent a lot of time on other things this year, and didn't get the snooze/"delayed send"/"tell me if no reply" features done.

But the underlying concepts didn't go to waste. We're using magic replication of a hidden folder, "#jmap", for blob uploads now, and we're indexing every single body part by SHA-1, allowing us to quickly find any blob, from any message, anywhere in a user's entire mailstore.

One day, this could help us to efficiently de-duplicate big attachments and save disk space for users who get the message mailed backwards and forwards a lot.

And features that fall under the general category of "scheduled future actions on messages and conversations" are still very much on our roadmap.

Looking ahead

When we developed our values statement a couple of weeks ago, we spent a lot of time talking about our plans for the next few years, and indeed our plans for the next few months as well!

We also distilled a mission statement: FastMail is the world's best independent email service for humans (explicitly not transactional/analytics/marketing emails), providing a pleasant and easy-to-use interface on top of a rock solid backend. Our other product lines, Pobox (email for life) and Listbox (simple mass email) complement our offering, and next year you'll see another product that builds on the expertise of both teams.

Upgrading the remaining server-side generated screens into the main app is on the cards, as is converting all our APIs to the JMAP data model. Once we're happy with APIs that we can support long term, we'll be publishing guides to allow third parties to build great things on top of our platform.

And of course we'll continue to react to the changing world that we live in, with a particular focus on making sure all our features work, and work well, on interfaces of all sizes. Our commitment to standards and interoperability is undiminished. We've joined M3AAWG and will be attending our first of their conferences next year, as well as continuing to contribute to CalConnect and getting involved with the IETF. Some of our staff are speaking at Linux Conf Australia in January, see us there!

New digs

We've spent a lot of the past couple of weeks looking at new office space. We're outgrowing our current offices, and since our lease expires next year, it's time to upgrade. We particularly need space because we'll be investing heavily in staffing next year, with a full time designer joining us here in Melbourne. We're also planning to keep improving our support coverage, and adding developers to allow us to have more parallel teams working on different things.

I totally plan to make sure I get the best seat in the house when office allocation comes around!

Technical debt

We moved a lot slower on some things than we had hoped in the past year. The underlying reason is the complexity that grows in a codebase that's been evolving over more than 15 years. Next year we will be taking stock, simplifying sections of that code and automating many of the things we're doing manually right now.

There's always a balance here, and my theory for automating tasks goes something like:

  1. do it once (or multiple times) to make sure you understand the problem
  2. do it another time, tracking the exact steps that were taken and things that were checked to make sure it was working properly
  3. write the automation logic and run it by hand, watching each step carefully to make sure it's doing what you want - as many times as necessary to be comfortable that it's all correct
  4. turn on automation and relax!
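One common way to implement step 3 is a dry-run flag, so the same script can print each step before it is trusted to execute anything. A hypothetical sketch:

```python
import subprocess

def run_step(cmd, dry_run=True):
    """Step 3 of the progression above: run the automation by hand with
    dry_run=True, watching what it would do; flip it off for step 4."""
    if dry_run:
        print("WOULD RUN:", " ".join(cmd))
        return 0
    return subprocess.run(cmd, check=True).returncode

# Hypothetical maintenance steps, for illustration only.
steps = [
    ["echo", "take snapshot"],
    ["echo", "restart service"],
]
for step in steps:
    run_step(step, dry_run=True)   # prints each step, touches nothing
```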

For my own part, the Calendar code is where I'm going to spend the bulk of my cleanup work; there are some really clunky bits in there. And I'm sure everyone else has their own area they are embarrassed by. Taking the time to do cleanup weeks, where we have all promised not to work on any new features, will help us in the long run; it's like a human sleeping and allowing the brain to reset.

What's exciting next year?

Me, I'm most excited about zeroskip, structured db and making Cyrus easier to manage, and I've asked a few other staff to tell me what excites them about 2017:

"Replacing our incoming SMTP and spam checking pipeline with a simpler and easier to extend system." — Rob M

"Can't wait to hang out at LCA (see you there?) where I'm doing my (first ever talk), and meet customers present (and future)! (all of the brackets)" — Nicola

"Making more tools and monitoring and other internal magic so everyone can get stuff done faster without worrying about breaking anything." — Rob N

"The continued exchange of ideas and software between FastMail and Pobox. I think that 2017 will be the year when a lot of our ongoing sharing will begin to bear fruit, and it's going to be fantastic" — Rik

"Focusing on Abuse and Deliverability — making sure your mail gets delivered, and keeping nasties out of your Inbox" — Marc

"Getting our new project in front of customers — it brings the best parts of Listbox's group email infrastructure together with Fastmail's interface expertise. It's going to be awesome!" — Helen

Now That’s What I Call Script-Assisted-Classified Pattern Recognized Music

Published 24 Dec 2016 by Jason Scott in ASCII by Jason Scott.

Merry Christmas; here is over 500 days (12,000 hours) of music on the Internet Archive.

Go choose something to listen to while reading the rest of this. I suggest either something chill or perhaps this truly unique and distinct ambient recording.


Let’s be clear. I didn’t upload this music, I certainly didn’t create it, and actually I personally didn’t classify it. Still, 500 Days of music is not to be ignored. I wanted to talk a little bit about how it all ended up being put together in the last 7 days.

One of the nice things about working for a company that stores web history is that I can use it to do archaeology against the company itself. Doing so, I find that the Internet Archive started soliciting “the people” to begin uploading items en masse around 2003. This is before YouTube, and before a lot of other services out there.

I spent some time tracking dates of uploads, and you can see various groups of people gathering interest in the Archive as a file destination in these early 00’s, but a relatively limited set all around.

Part of this is that it was a little bit of a non-intuitive effort to upload to the Archive; as people figured it all out, they started using it, but a lot of other people didn’t. Meanwhile, YouTube and other also-rans came into being and picked up a lot of the “I just want to put stuff up” crowd.

By 2008, things start to take off for Internet Archive uploads. By 2010, things take off so much that 2008 looks like nothing. And now it’s dozens or hundreds of multi-media uploads a day through all the Archive’s open collections, not counting others who work with specific collections they’ve been given administration of.

In the case of the general uploads collection of audio, which I’m focusing on in this entry, the number of items is now at over two million.

This is not a sorted, curated, or really majorly analyzed collection, of course. It’s whatever the Internet thought should be somewhere. And what ideas they have!

Quality is variant. Finding things is variant, although the addition of new search facets and previews have made them better over the years.

I decided to do a little experiment: slight machine-assisted “find some stuff” sorting. Let it loose on 2 million items in the hopper, see what happens. The script was called Cratedigger.

Previously, I did an experiment against keywording on texts at the Archive – the result was “bored intern” level, which was definitely better than nothing, and in some cases, that bored intern could slam through a 400-page book and determine a useful word cloud in less than a couple seconds. Many collections of items I uploaded have these word clouds now.

It’s a little different with music. I went about it this way with a single question:

Cratediggers is not an end-level collection – it’s a holding bay to do additional work, but it does show the vast majority of people would upload a sound file and almost nothing else. (I’ve not analyzed quality of description metadata in the no-image items – that’ll happen next.) The resulting ratio of items-in-uploads to items-for-cratediggers is pretty striking – less than 150,000 items out of the two million passed this rough sort.
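The exact sort criterion isn't spelled out above, but from the description (a sound file with something more than "almost nothing else" attached, like a cover or icon), a rough Cratedigger-style filter might look something like this purely illustrative sketch:

```python
AUDIO_EXTS = {".mp3", ".flac", ".ogg", ".wav"}
IMAGE_EXTS = {".jpg", ".jpeg", ".png", ".gif"}

def cratedigger_candidate(filenames):
    """Rough guess at the Cratedigger sort: keep items whose uploader
    attached an image (a cover or icon) alongside the audio. The real
    script's criteria aren't published; this is hypothetical."""
    exts = {name[name.rfind("."):].lower() for name in filenames if "." in name}
    return bool(exts & AUDIO_EXTS) and bool(exts & IMAGE_EXTS)

# A bare sound file fails the sort; audio plus a cover image passes.
```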

The Bored Audio Intern worked pretty OK. By simply sending a few parameters, The Cratediggers Collection ended up building on itself by the thousands without me personally investing time. I could then focus on more specific secondary scripts that do things in an even more lazy manner, ensuring laziness all the way down.

The next script allowed me to point to an item in the Cratediggers collection and say “put everything by this uploader that is in Cratediggers into this other collection”, with “this other collection” being spoken word, sermons, or music. A person who uploaded music that got into Cratediggers generally uploaded other music (same with sermons and spoken word). As I ran these helper scripts, they did amazingly well; I didn’t have to do much beyond that.

As of this writing, the music collection contains over 400 solid days of music. They are absolutely genre-busting, ranging from industrial and noise all the way through beautiful jazz and a cappella. There are one-of-a-kind rock and acoustic albums, and simple field recordings of live events.

And, ah yes, the naming of this collection… Some time ago I took the miscellaneous texts and writings and put them into a collection called Folkscanomy.

After trying to come up with the same sort of name for sound, I discovered a very funny thing: you can’t really attach any two words involving sound together without finding some company already using the result as a name. Trust me.

And that’s how we ended up with Folksoundomy.

What a word!

The main reason for this is I wanted something unique to call this collection of uploads that didn’t imply they were anything other than contributed materials to the Archive. It’s a made-up word, a zesty little portmanteau that is nowhere else on the Internet (yet). And it leaves you open for whatever is in them.

So, about the 500 days of music:

Absolutely, one could point to YouTube and the mass of material being uploaded there as being superior to any collection sitting on the archive. But the problem is that they have their own robot army, which is a tad more evil than my robotic bored interns; you have content scanners that have both false positives and strange decorations, you have ads being put on the front of things randomly, and you have a whole family of other small stabs and jabs towards an enjoyable experience getting in your way every single time. Internet Archive does not log you, require a login, or demand other handfuls of your soul. So, for cases where people are uploading their own works and simply want them to be shared, I think the choice is superior.

This is all, like I said, an experiment – I’m sure the sorting has put some things in the wrong place, or we’re missing out on some real jewels whose uploaders didn’t think to attach a “cover” or icon to the files. But as a first swipe, I moved 80,000 items around in 3 days, and that’s more than any single person can normally do.

There’s a lot more work to do, but that music collection is absolutely filled with some beautiful things, as is the whole general Folksoundomy collection. Again, none of this is me, or some talent I have – this is the work of tens of thousands of people, contributing to the Archive to make it what it is, and while I think the Wayback Machine has the lion’s share of the Archive’s world image (and deserves it), there’s years of content and creation waiting to be discovered for anyone, or any robot, that takes a look.

My Top Ten Gigs (as a Punter) and Why

Published 24 Dec 2016 by Dave Robertson in Dave Robertson.

I had a dream. In the dream I had a manager. The manager told me I should write a “list” style post, because they were trending in popularity. She mumbled something about the human need for arbitrary structure amongst the chaos of existence. Anyway, these short anecdotes and associated music clips resulted. I think I really did attend these gigs though, and not just in a dream.

10. Dar Williams at Phoenix Concert Theatre, Toronto, Canada – 20 August 2003

You don’t need fancy instrumentation when you’re as charming, funny and smart as Dar Williams. One of her signature tunes, The Christians and the Pagans, seems appropriate to share this evening, given the plot takes place on Christmas Eve.

9. Paul Kelly at Sidetrack Cafe, Edmonton, Canada – 18 March 2004

The memorable thing about this gig was all the Aussies coming out of the woodwork of this icy Prairie oil town, whose thriving music underbelly was a welcome surprise to me. Incidentally, the Sidetrack Cafe is the main location of events in “For a Short Time” by fellow Aussie songwriter Mick Thomas. Tiddas did a sweet cover of this touching song:

8. Hussy Hicks at the Town Hall, Nannup – 5 March 2016

Julz Parker and Leesa Gentz have serious musical chops. Julz shreds on guitar and Leesa somehow manages not to shred her vocal cords despite belting like a beautiful banshee. Most importantly they have infectious fun on stage, and I could have picked any of the gigs I’ve been to, but I’ll go with the sweat-anointed floorboards of one of their infamous Nannup Town Hall shows. This video is a good little primer on the duo.

7. The National at Belvoir Amphitheatre, Swan Valley – 14 February 2014

After this gig I couldn’t stop dancing in the paddock with friends and strangers amongst the car headlights. The National are a mighty fine indie rock band, fronted by the baritone voice of Matt Berninger. He is known for downing a bottle of wine on stage, and is open about it being a crutch to deal with nerves and get in the zone. This clip from Glastonbury is far from his best vocal delivery, but it’s hard to argue that it’s not exciting, and the audience are certainly on his wavelength!

6. Kathleen Edwards at Perth Concert Hall balcony – 17 February 2006

I was introduced to Kathleen Edwards by a girlfriend who covered “Hockey Skates” and I didn’t hesitate to catch her first, and so far only, performance in Perth. The easy banter of this fiery redhead, and self-proclaimed potty mouth, included warning a boisterous woman in the audience that her husband/guitarist, Colin Cripps, was not “on the market”. Change the Sheets is a particularly well produced song of Kathleen’s, engineered by Justin Vernon (aka Bon Iver):

5. The Cure at Perth Arena – 31 July 2016

One of the world’s most epic bands, they swing seamlessly from deliriously happy pop to gut-wrenching rock dirges, all with perfectly layered instrumentation. This was my third Cure show and my favourite, partly because I was standing (my preferred way to experience any energetic music) and also because of the great sound, which meant I didn’t need my usual ear plugs. Arguably the best Cure years were ’85 to ’92, when they had Boris Williams on drums, but this was a fine display, and at the end of the three hours I wanted them to keep playing for three more. “Lovesong” is my innocent karaoke secret:

4. Lucie Thorne & Hamish Stuart in my backyard – 26 February 2014

I met Lucie Thorne at a basement bar called the Green Room in Vancouver in 2003. She is the master of the understatement, with a warm voice that glides out the side of her mouth, and evocative guitar work cooked just the right amount. Her current style is playing a Guild Starfire through a tremolo pedal into a valve amp, while being accompanied by the tasteful jazz drumming legend Hamish Stuart. Here’s a clip of the house concert in question:

3. Ryan Adams and the Cardinals at Metropolis, Fremantle – 25 January 2009

The first review I read of a Ryan Adams album said he could break hearts singing a shopping list, and he’s probably the artist I’ve listened to the most in the last decade. He steals ideas from the greats of folk, country, rock, metal, pop and alt-<insert genre>, but does it so well and so widely, and with such a genuine love and talent for music. I’m glad I caught The Cardinals in their prime and there was a sea of grins flowing out onto the street after the three hour show. This stripped back acoustic version of “Fix It” is one of my favourites:

2. Damien Rice at Civic Hotel – 9 October 2004

I feel Damien Rice’s albums, with the exception of “B-Sides”, are over-produced, with too many strings garishly trying to tug your heart strings. Live and solo, however, Damien is a rare force with no strings attached or required. I heard a veteran music producer say the only solo live performer he’s seen with a similar power over an audience was Jeff Buckley. I remember turning around once at the Civic Hotel gig and seeing about half the audience in tears, and I was well and truly welling up.

1. Portland Cello Project performing Radiohead’s Ok Computer at Aladin Theatre, Portland, Oregon – 22 September 2012

Well if crying is going to be a measure of how good a gig is then choosing my number one is easy. I cried all the way through the Portland Cello Project’s performance of Ok Computer and wrote a whole separate post about that.

Honourable mentions:

Joe Pug at Hardly Strictly Bluegrass, San Francisco – October 2012.

Yothu Yindi at Curtin University – 1996

Billy Bragg at Enmore Theatre, Sydney – 14 April 1999

Sally Dastey at Mojos – 2004

CR Avery at Marine Club in Vancouver – 28 November 2003

Jill Sobule at Vancouver Folk Festival – July 2003

Let the Cat Out in my lounge room – 2011

Martha Wainwright at Fly By Night, Fremantle – 22 November 2008

The Mountain Goats at The Bakery, Perth – 1 May 2012… coming to town again in April – come!


SPF, DKIM & DMARC: email anti-spoofing technology history and future

Published 24 Dec 2016 by Rob Mueller in FastMail Blog.

This is the twenty-fourth and final post in the 2016 FastMail Advent Calendar. Thanks for reading, and as always, thanks for using FastMail!

Quick, where did this email come from and who was it sent to?

From: PayPal <>
To: Rob Mueller <>
Subject: Receipt for your donation to Wikimedia Foundation, Inc.

Actually, these headers tell you nothing at all about where the email really came from or went to. There are two separate parts to the main email standards. RFC5322 (originally RFC822/RFC2822) specifies the format of email messages, including headers like from/to/subject and the body content. However, it doesnʼt specify how messages are transmitted between systems. RFC5321 (originally RFC821/RFC2821) describes the Simple Mail Transfer Protocol (SMTP) which details how messages are sent from one system to another.

The separation of these causes a quirk: the format of a message need not have any relation to the source or destination of a message. That is, the From/To/Cc headers you see in an email may not have any relation to the sender of the message or the actual recipients used during the SMTP sending stage!

When the email standards were developed, the internet was a small network of computers at various universities where people mostly knew each other. The standard was developed with the assumption that the users and other email senders could be trusted.

So, the From header would be a userʼs own email address. When you specified who you wanted to send the message to, those addresses would be put in the To header and used in the underlying SMTP protocol delivering the messages to those people (via the RCPT TO command in SMTP). This separation of email format and transport also allows features like Bcc (blind carbon copy) to work. Any addresses a message is bccʼd to donʼt appear in the message headers, but are used in the underlying SMTP transport to deliver the message to the right destination.
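This separation is easy to see with Python's standard email library: the From/To headers are just text inside the message, while the envelope recipients are a separate list handed to the SMTP layer. A minimal sketch (all names and addresses here are invented for illustration):

```python
from email.message import EmailMessage

# Build an RFC 5322 message: these headers are only display text.
msg = EmailMessage()
msg["From"] = "Alice <alice@example.com>"
msg["To"] = "Bob <bob@example.com>"
msg["Subject"] = "Lunch?"
msg.set_content("Meet at noon?")

# The RFC 5321 envelope is a separate list of RCPT TO addresses.
# A Bcc recipient appears here, but never in the headers above.
envelope_rcpts = ["bob@example.com", "carol@example.com"]  # carol is bcc'd

# With a real SMTP server you would pass the envelope explicitly, e.g.:
#   smtplib.SMTP("mail.example.com").send_message(
#       msg, from_addr="alice@example.com", to_addrs=envelope_rcpts)

# The message text contains no trace of the Bcc recipient.
assert "carol@example.com" not in msg.as_string()
```

Because delivery follows `envelope_rcpts` rather than the headers, carol receives a copy even though her address appears nowhere in the message itself.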

Over time of course, this assumption of a friendly environment became less and less true. We now live in a world where much of the internet is downright hostile. We need to heavily protect our systems from mountains of spam and malicious email, much of it designed to trick people.

There are many layers of protection from spam, from RBLs to detect known spam-sending servers, to content analysis that helps classify messages as spammy or not. Here, we want to talk about the major anti-spoofing techniques that have been developed for email.


One of the earliest anti-spoofing efforts was SPF (Sender Policy Framework). The idea behind SPF was that senders could specify, via an SPF record published in DNS, what servers were allowed to send email for a particular domain. For example, only servers X, Y & Z are allowed to send email for addresses.
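For illustration, an SPF policy is published as a plain DNS TXT record. A hypothetical record (invented domain and addresses) permitting one network range and one mail host might look like:

```
example.com.  3600  IN  TXT  "v=spf1 ip4: a:mail.example.com -all"
```

The -all at the end tells receivers that mail from any other source should fail the check; a softer ~all marks it as suspicious instead.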

Unfortunately, SPF has many problems. For starters, it only works on the domain in the SMTP sending protocol, known as the MAIL FROM envelope address. No email software ever displays this address. (Its main use is where to send error/bounce emails if final delivery fails.) Since thereʼs no requirement for the MAIL FROM address to match the From header address in any way, effectively the only thing youʼre protecting against is the spoofing of an email address no one ever sees.

In theory, this does help to address one particular type of spam: backscatter. Backscatter is the flood of bounce messages you receive when spammers send email pretending to be from you and those messages canʼt be delivered.

In practice, it would only do that if people actually blocked email that failed SPF checks at SMTP time. They rarely do that because SPF has a major problem. It completely breaks traditional email forwarding. When a system forwards an email, itʼs supposed to preserve the MAIL FROM address so any final delivery failures go back to the original sender. Unfortunately, that means when someone sends from Hotmail to FastMail, and then you forward on from FastMail to Gmail, in the FastMail to Gmail hop, there's a mismatch. The MAIL FROM address will be an domain, but the SPF record will say that FastMail isnʼt allowed to send email with an domain address!

There was an attempt to fix this (SRS), but itʼs surprisingly complex. Given the relatively low value of protection SPF provides, not many places ended up implementing SRS. The situation we ended up with is that SPF is regarded as a small signal for email providers' use. If SPF passes, itʼs likely the email is legitimately from the domain in the MAIL FROM address. If it fails, well... thatʼs not really much information at all. It could be a forwarded email, it could be a misconfigured SPF record, or many other things. But stay tuned for its next life in DMARC.
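The idea behind SRS is that each forwarder rewrites the envelope sender to an address at its own domain (so SPF passes), while encoding the original address plus a timestamp and a short hash so that bounces can still be routed back and the scheme canʼt be abused as an open relay. A rough sketch of the address shape, simplified from the real scheme and using invented domains and a made-up secret:

```python
import hashlib
import hmac

def srs_rewrite(orig_local, orig_domain, forwarder, secret, timestamp="TT"):
    """Rewrite a sender as SRS0=<hash>=<ts>=<orig-domain>=<orig-local>@<forwarder>.

    Simplified: real SRS uses a base32 timestamp and a truncated base64 hash.
    """
    mac = hmac.new(secret, f"{timestamp}{orig_domain}{orig_local}".encode(),
                   hashlib.sha1).hexdigest()[:4]
    return f"SRS0={mac}={timestamp}={orig_domain}={orig_local}@{forwarder}"

# A Hotmail sender forwarded through a hypothetical forwarder.example host:
rewritten = srs_rewrite("alice", "hotmail.com", "forwarder.example", b"s3cret")
print(rewritten)
```

The forwarder can later validate the hash on any bounce addressed to the SRS address, unpack the original sender, and relay the bounce onward.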


DKIM (DomainKeys Identified Mail) is a significantly more complex and interesting standard compared to SPF. It allows a particular domain owner (again, via a record published in DNS) to cryptographically sign parts of a message so that a receiver can validate that they havenʼt been altered.

DKIM is a bit fiddly at the edges and took a while to get traction, but is now commonly used. Almost 80% of email delivered to FastMail is DKIM signed.

So letʼs take the message we started with at the top and add a DKIM signature to it.

DKIM-Signature: v=1; a=rsa-sha256;; s=pp-dkim1; c=relaxed/relaxed;
    q=dns/txt;; t=1480474251;
From: PayPal <>
To: Rob Mueller <>
Subject: Receipt for your donation to Wikimedia Foundation, Inc.

Using a combination of public key cryptography and DNS lookups, the receiver of this email can determine that the domain "" signed the body content of this email and a number of the email's headers (in this case, From, Subject, Date, To and a couple of others.) If it validates, we know the body content and specified headers have not been modified by anyone along the way.
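The c=relaxed/relaxed tag in the signature above names the canonicalization rules: before hashing, header names are lowercased, folded lines are unfolded, and runs of whitespace are collapsed, so trivial transport rewrites donʼt invalidate the signature. A minimal sketch of relaxed header canonicalization, simplified from RFC 6376 (it ignores some edge cases around empty values):

```python
import re

def relaxed_header(name, value):
    """'Relaxed' DKIM header canonicalization, simplified from RFC 6376 §3.4.2."""
    value = re.sub(r"\r\n[ \t]+", " ", value)   # unfold folded header lines
    value = re.sub(r"[ \t]+", " ", value)       # collapse whitespace runs
    return f"{name.lower()}:{value.strip()}\r\n"

# Two byte-for-byte different renderings of the same header canonicalize
# to the same string, so the DKIM hash still matches after minor rewraps:
a = relaxed_header("Subject", "Receipt  for your\r\n\t donation")
b = relaxed_header("SUBJECT", "Receipt for your donation")
assert a == b == "subject:Receipt for your donation\r\n"
```

The signer and verifier both canonicalize the listed headers this way before hashing, which is what lets a signature survive harmless reformatting while still catching real modifications.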

While this is quite useful, there are still big questions that arenʼt answered.

What about emails with a From address of that arenʼt DKIM signed by Maybe not every department within PayPal has DKIM signing correctly set up. Should we treat unsigned emails as suspicious or not?

Also, how do I know if I should trust the domain that signs the email? In this case, is probably owned by the Australian division of PayPal Holdings, Inc, but what about Itʼs not obvious what domains I should or shouldnʼt trust. In this case, the From address matches the DKIM signing domain, but that doesnʼt need to be the case. You can DKIM sign with any domain you want. Thereʼs nothing stopping a scammer using an address in the From header, but signing with the domain.

Despite this, DKIM provides real value. It allows an email receiver to associate a domain (or multiple, since multiple DKIM signatures on an email are possible and in some cases useful) with each signed email. Over time, the receiver can build up a trust metric for that domain and/or associated IPs, From addresses, and other email features. This helps discriminate between "trusted" emails and "untrusted" emails.


DMARC (Domain-based Message Authentication, Reporting & Conformance) attempts to fix part of this final trust problem by building on DKIM and SPF. Again, by publishing a record in DNS, domain owners can specify what email receivers should do with email received from their domain. In the case of DMARC, we consider email to be from a particular domain by looking at the domain in the From header: the address you see when you receive a message.

In its basic form, when you publish a DMARC record for your domain, receivers should:

  1. Check the From header domain matches the DKIM signing domain (this is called alignment), and that the DKIM signature is valid.

  2. Check that the From header domain matches the SMTP MAIL FROM domain, and that the senderʼs IP address is validated by SPF.

If either is true, the email "passes" DMARC. If both fail, the DMARC DNS record specifies what the receiver should do, which can include quarantining the email (sending it to your spam folder) or rejecting the email. Additionally, the DMARC record can specify an email address to send failure reports to. DMARC also allows senders to specify which percentage of their mail to apply DMARC to, so they can make changes in a gradual and controlled way.
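The decision above can be sketched as a small function. This is a simplification: it implements strict alignment only, whereas real DMARC also supports relaxed alignment, where a match on the organizational domain is enough.

```python
def dmarc_result(from_domain, dkim_domain, dkim_valid, mailfrom_domain, spf_pass):
    """Return True if the message passes DMARC (strict alignment only).

    from_domain:     domain of the From header address
    dkim_domain:     d= domain of a valid DKIM signature, or None
    dkim_valid:      whether the DKIM signature verified
    mailfrom_domain: domain of the SMTP MAIL FROM envelope address
    spf_pass:        whether SPF validated the sending IP
    """
    dkim_aligned = dkim_valid and dkim_domain == from_domain
    spf_aligned = spf_pass and mailfrom_domain == from_domain
    return dkim_aligned or spf_aligned

# A forwarded message: SPF breaks, but a surviving DKIM signature still passes.
assert dmarc_result("paypal.com", "paypal.com", True, "fastmail.com", False)
# No valid signature and a mismatched MAIL FROM: fail, so apply the p= policy.
assert not dmarc_result("paypal.com", None, False, "fastmail.com", True)
```

Only when both checks fail does the receiver consult the p= policy (none, quarantine, or reject) from the senderʼs DMARC record.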

So back to our example email:

DKIM-Signature: v=1; a=rsa-sha256;; s=pp-dkim1; c=relaxed/relaxed;
    q=dns/txt;; t=1480474251;
From: PayPal <>
To: Rob Mueller <>
Subject: Receipt for your donation to Wikimedia Foundation, Inc.

In this case, the From header domain is Letʼs check if they publish a DMARC policy.

$ dig +short TXT
"v=DMARC1; p=reject;;,"

Yes, they do. Letʼs run our checks! Does the From domain match the DKIM signing domain? Yes, so we have alignment. If the email wasnʼt DKIM signed, or if it were DKIM signed but the signing domain had been something else (e.g. signed by a scammer), then there wouldnʼt have been alignment, and so DMARC would have failed. At that point, we would have consulted the DMARC policy, which specifies p=reject, which says that we should just reject the forged email.

In this case (I havenʼt included the entire DKIM signature, but I can tell you it validated), the email did pass DMARC. So we can accept it. Because of alignment, we know the domain in the From address also matches the DKIM signing domain. This allows users to be sure that when they see a From: address, they know that itʼs a real message from, not a forged one!

This is why DMARC is considered an anti-phishing feature. It finally means that the domain in the From address of an email canʼt be forged (at least for domains that DKIM sign their emails and publish a DMARC policy). All that, just to ensure the domain in the From address canʼt be forged, in some cases.

Unfortunately, as is often the case, this feature also brings some problems.

DMARC allows you to use SPF or DKIM to verify a message. If you donʼt DKIM sign a message and rely only on SPF, when a message is forwarded from one provider to another, DMARC will fail. If you have a p=reject policy set up, the forwarding will fail. Unlike in SPF, where failure is a "weak signal", a DMARC policy is supposed to tell receivers more strictly what to do, making bounces a strong possibility.

The solution: always make sure you DKIM sign mail if you have a DMARC policy. If your email is forwarded, SPF will break, but DKIM signatures should survive. SRS wonʼt help with DMARC, because replacing the MAIL FROM envelope with your own domain means the MAIL FROM domain doesnʼt match the From header domain. This is an alignment failure, and so not a pass result for DMARC.

I say "should survive", because, again, not all providers are great at that. In theory, forwarding systems preserve the complete structure of your message. Unfortunately, thatʼs not always the case. Even large providers have problems with forwarding that inadvertently alters the content or structure of an email (Exchange-based systems and iCloud are notorious for this). Even a slight modification can and will break DKIM signatures. Again, combined with a DMARC p=reject policy, this can result in email being rejected.

The solutions in this case are to:

  1. Bug those providers to fix their email forwarding and not to modify the email in transit. DKIM is now a well established standard; providers should ensure their forwarding doesnʼt break DKIM signatures.

  2. Switch to using POP to pull email from your remote provider. We donʼt do SPF/DKIM/DMARC checking on emails pulled from a remote mailbox via POP.

  3. Donʼt forward this mail. Wherever the emails are coming from, change your email address at that service provider so it points directly to your FastMail email address and avoids forwarding altogether.

Thereʼs one other case thatʼs a known big issue with DMARC: mailing lists. Mailing lists can be considered a special case of email forwarding: you send to one address, and itʼs forwarded to many other addresses (the mailing list members). However, itʼs traditional for mailing lists to modify the emails during forwarding, by adding unsubscribe links or standard signatures to the bottom of every message and/or adding a [list-id] tag to the message subject.

DKIM signing subjects and message bodies is very common. Changing them breaks the DKIM signature. So, if the sender's domain has a p=reject DMARC policy, then when the mailing list software attempts to forward the message to all the mailing list members, the receiving systems will see a broken DKIM signature and thus reject the email. (This was actually a significant problem when Yahoo and AOL both enabled p=reject on their user webmail service domains a few years ago!)

Fortunately, thereʼs a relatively straightforward solution to this. Mailing list software can rewrite the From address to one the mailing list controls, and re-sign the message with DKIM for that domain. This and a couple of other solutions are explained on the DMARC information website. These days, the majority of mailing list software systems have implemented one of these changes, and those that havenʼt will very likely have to when Gmail enables p=reject on sometime early next year. Not being able to forward emails from the worldʼs largest email provider will definitely hamper your mailing list.

These authentication systems affect FastMail in two ways. What we do for email received from other senders, and what we do when sending email.

SPF, DKIM & DMARC for email received at FastMail

Currently, FastMail does SPF, DKIM and DMARC checking on all incoming email received over SMTP (but not email retrieved from remote POP servers).

Passing or failing SPF and/or DKIM validation only adjusts a message's spam score. We donʼt want to discriminate against a failing DKIM signature for an important domain, and we donʼt want to whitelist a spammy domain with a valid DKIM signature. A DKIM signature is treated as context information for an email, not a strong whitelist/blacklist signal on its own.

For DMARC, the domain owners are making a strong statement about what they want done with email from their domains. For domains with a p=quarantine policy, we give failing emails a high spam score to ensure they go to the userʼs Spam folder. For domains with a p=reject policy, we donʼt currently reject at SMTP time but effectively still do a quarantine action with an even higher score. We hope to change this in the future after adding some particular exceptions known to cause problems.

We add a standard Authentication-Results header to all received emails explaining the results of the SPF, DKIM and DMARC policies applied. Surprisingly, the existing software to do this was either unmaintained or buggy, so we ended up writing an open source solution we hope others will use.

Back to our example again. Hereʼs that PayPal email with the corresponding Authentication-Results header.

    dkim=pass (2048-bit rsa key) header.b=PVkLotf/;
DKIM-Signature: v=1; a=rsa-sha256;; s=pp-dkim1; c=relaxed/relaxed;
    q=dns/txt;; t=1480474251;
From: PayPal <>
To: Rob Mueller <>
Subject: Receipt for your donation to Wikimedia Foundation, Inc.

You can see SPF, DKIM, and DMARC all passed.

The information in this header is used by other parts of the FastMail system. For instance, if youʼve added to your address book to whitelist it, weʼll ignore the whitelisting if DMARC validation fails. This ensures that a scammer canʼt create an email with a forged From address of and get it into your Inbox because youʼve whitelisted that From address.

SPF, DKIM & DMARC for FastMail and user domains

All FastMail domains currently have a relaxed SPF policy (by design because of legacy systems, see DMARC below) and we DKIM sign all sent email. We actually sign with two domains, the domain in the From header, as well as our domain. This is to do with some Feedback Loops, which use the DKIM signing domain to determine the source of the message.

For user domains, weʼll also publish a relaxed SPF policy and a DKIM record if you use us to host the DNS for your domain. If you use another DNS provider, you need to make sure you copy and publish the correct DKIM record at your DNS provider. Once we detect itʼs set up, weʼll start DKIM signing email you send through us.
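For reference, the DKIM public key itself is published as a TXT record under a selector label. A hypothetical record (invented domain, key truncated) for a selector named s1 might look like:

```
s1._domainkey.example.com.  IN  TXT  "v=DKIM1; k=rsa; p=MIIBIjANBgkqhkiG..."
```

The s= tag in a DKIM-Signature header tells the receiver which selector to look up, so a domain can rotate keys simply by publishing a new selector.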

Currently, FastMail doesnʼt have a DMARC policy for any of our domains, and we donʼt publish a default policy for user domains either. This means that users can send emails with From addresses from anywhere. This is a bit of a legacy situation. When FastMail started more than 16 years ago, none of these standards existed. It was common for people to set up all sorts of convoluted ways of sending email with the assumption they could send with any From address they wanted. (Old internet connected fax/scanner machines are a particularly notorious example of this.)

Over time, this is becoming less and less true, and more and more people are expecting that emails will be DKIM signed and/or have valid SPF and/or have a DMARC policy set for the domain. Itʼs likely sometime in the future weʼll also enable a p=reject policy for our domains. To send with an From address, youʼll have to send through our servers. This is perfectly possible with authenticated SMTP, something basically everything supports these days.

Ongoing problems

Even though DMARC allows us to verify that the domain in the From header actually sent and authenticated the email and its contents, a great anti-phishing feature, itʼs still a long way from stopping phishing. As we personally experienced, people donʼt check their emails with a skeptical eye. We regularly saw phishing emails sent to FastMail users like:

From: No Reply <>
Subject: Urgent! Your account is going to be closed!

Click [here]( right now or your account will be closed

Enough people clicked on it, and filled in the login form on a bogus website (that didnʼt even look much like FastMail), that weʼd see multiple stolen accounts daily. Unfortunately, trying to educate users just doesnʼt seem to work.

One of the main advantages of email is that itʼs a truly open messaging system. Anyone in the world can set up an email system and communicate with any other email system in the world. Itʼs not a walled garden controlled by a single company or website. This openness is also its biggest weakness, since it means legitimate senders and spammers/scammers are on an equal footing. This means that email will continue its evolutionary arms race between spammers/scammers and receivers into the future, trying to determine if each email is legitimate using more and more factors. Unfortunately this means there will always be false positives (emails marked as spam that shouldnʼt be) and false negatives (spam/scam emails that make it through to a personʼs inbox). Thereʼs never going to be a "perfect" email world, regardless of what systems are put in place, but we can keep trying to get better and better.

Email authentication in the future

Although the main problem of mailing lists' incompatibility with DMARC p=reject policies has mostly been solved, it creates another problem: receivers now have to extend their trust to the mailing list providerʼs domain. This provides an incentive for spammers to target mailing lists, hoping for laxer spam checking controls that will forward the email to final receiving systems that trust the mailing list provider. An emerging standard called ARC attempts to let receivers look back at previous receiving servers in a trusted way, so they can see authentication results and associated domains from earlier stages of a multi-step delivery path.

One thing we would like to see is some way to associate a domain with a real-world identity. One way would be to piggyback on the SSL Extended Validation (EV) Certificate system. Obtaining an EV certificate requires proof of the requesting entity's legal identity. You see this in web browsers when you navigate to sites that use an EV certificate. For instance, our site uses an EV certificate, and browsers will show "FastMail Pty Ltd" in the address bar. Being able to display a clear "PayPal, Inc" next to emails legitimately from PayPal or any other financial institution would seem to be a significant win for users (modulo the slightly sad results we already found regarding users falling for phishing emails).

Unfortunately, there's no standard for this now and nothing on the horizon, and it's not entirely obvious how to do this without support from the senders. A naive approach that doesn't require sender changes would be to extract the domain from a From header address and attempt to make an https:// connection to it. But there are all sorts of edge cases. For instance, PayPal uses country-specific domains for DKIM signing (e.g.,, but if you go to in a web browser, it redirects to You can't just follow any redirect, because a scammer could set up and redirect to Working out which redirects should actually be followed is entirely non-trivial.


This post has turned out significantly longer than I originally anticipated, but it shows just how complex a subject email authentication is in a modern context. In many cases, FastMail tries hard to make these things "just work", both as a receiver from other systems, and if you're a customer as a sender. If you host DNS for your domain with us, we set up SPF and DKIM signing records automatically for you. We don't currently set up a DMARC record (there are still too many different ways people send email), but we hope in the future to allow easier construction and rollout of DMARC records for user domains.

PGP tools with FastMail

Published 23 Dec 2016 by Nicola Nye in FastMail Blog.

This is the twenty-third and penultimate post in the 2016 FastMail Advent Calendar. Stay tuned for the final post tomorrow.

Earlier in our advent series, we looked at why FastMail doesn't have PGP support, and we mentioned that the best way to use PGP was with a command line or IMAP client.

So, as promised, a quick guide to (some of) the open source PGP clients available for use with your FastMail account! We definitely recommend that you use Open Source encryption software, and preferably reproducible builds.

Not sure how encryption like PGP works? This is a basic overview of encryption which leads into an understanding of PGP to encrypt email. If you plan on taking your privacy seriously, we recommend further reading to understand the risks, rewards and usability issues associated with using encryption software.

While we have done some basic research on these options, we can't provide any guarantees as to their suitability for your particular situation.

Browser plugins



Native clients



iOS (iPhone/iPad)

There are no open source applications available for iOS, but these apps are available (and claim to be built on open source implementations of Open PGP) if you are looking for options.




Install a mail client compatible with a set of plugins to enable PGP.




Command Line

We recommend GNU Privacy Guard on the command line. It is available as source code and binaries for many platforms.

Chris, our intrepid tester, has set up gpg on his work laptop to allow him to securely transfer data to his home machine, without ever bringing a copy of the private key to work. He's given the following set of steps, which include easy-to-use aliases:

Generate a key with: gpg --gen-key (it asks some questions)

Export the public key with: gpg --armor --output ~/ --export It will look like this:

Version: GnuPG v1


On another machine, e.g. your work machine (somewhere you want to send messages/files from), import the key with: gpg --import ~/ Then add the following to your ~/.bashrc file:

alias ToChris='gpg --encrypt --recipient="" --armor'
alias scramble="gpg --output encrypted.gpg --encrypt --recipient "

As the work machine does not have your password or private key, you can then create encrypted messages/files from the command line:

echo "This is a private message. Remember to feed Ninja" | ToChris
Version: GnuPG v1


You can then send the GPG message without others being able to read it (e.g. by copying and pasting that text directly into the FastMail web interface as the body of an email)

The command: scramble <filename> will create encrypted.gpg which can be attached to an email.

Key parties

There are plenty of good resources online for how to prepare for a key signing party. Parties are often associated with conferences, allowing you to build a web of trust with other people in your field. Just make sure you know which kind of key party you're attending.

Example GPG bootstrapping

Bron shows the full process of creating a brand new key to replace his expired key, and signing a document with it.

brong@wot:~$ gpg --list-keys
pub   rsa2048 2015-09-20 [SC] [expired: 2016-09-19]
uid           [ expired] Bron Gondwana <>

Shows how long since I've last needed to sign something!

brong@wot:~$ gpg --gen-key
gpg (GnuPG) 2.1.15; Copyright (C) 2016 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

gpg: starting migration from earlier GnuPG versions
gpg: porting secret keys from '/home/brong/.gnupg/secring.gpg' to gpg-agent
gpg: key 410D67927CA469F8: secret key imported
gpg: migration succeeded
Note: Use "gpg --full-gen-key" for a full featured key generation dialog.

GnuPG needs to construct a user ID to identify your key.

Real name: Bron Gondwana
Email address:
You selected this USER-ID:
    "Bron Gondwana <>"

Change (N)ame, (E)mail, or (O)kay/(Q)uit? O
We need to generate a lot of random bytes. It is a good idea to perform
some other action (type on the keyboard, move the mouse, utilize the
disks) during the prime generation; this gives the random number
generator a better chance to gain enough entropy.

At this point it popped up a dialog asking me to choose a passphrase.

We need to generate a lot of random bytes. It is a good idea to perform
some other action (type on the keyboard, move the mouse, utilize the
disks) during the prime generation; this gives the random number
generator a better chance to gain enough entropy.
gpg: key D92B20BCF922A993 marked as ultimately trusted
gpg: directory '/home/brong/.gnupg/openpgp-revocs.d' created
gpg: revocation certificate stored as '/home/brong/.gnupg/openpgp-revocs.d/8D8DEE2A5F30EF2E617BB2BBD92B20BCF922A993.rev'
public and secret key created and signed.

pub   rsa2048 2016-12-22 [SC]
uid                      Bron Gondwana <>
sub   rsa2048 2016-12-22 [E]


Now I have a new key. Let's pop that on the keyservers:

brong@wot:~$ gpg --send-keys 8D8DEE2A5F30EF2E617BB2BBD92B20BCF922A993
gpg: sending key D92B20BCF922A993 to hkp://
brong@wot:~$ echo "So you can all encrypt things to me now, and verify my signature (assuming you trust a fingerprint from a blog)" | gpg --clearsign
Hash: SHA256

So you can all encrypt things to me now, and verify my signature (assuming you trust a fingerprint from a blog)


And you can tell that I wrote this and none of my colleagues can edit that text and put words in my mouth (unless they create a different key with my email address and falsify the key generation part of the blog post as well!)

The command line is the most secure way to use PGP: with your email software and your encryption software running as entirely separate processes, only ciphertext or signed cleartext is ever transferred into the emails sent out from your secure computer.


Published 23 Dec 2016 by fabpot in Tags from Twig.

texvc back in Debian

Published 23 Dec 2016 by legoktm in The Lego Mirror.

Today texvc was re-accepted for inclusion into Debian. texvc is a TeX validator and converter that can be used with the Math extension to generate PNGs of math equations. It had been removed from Jessie when MediaWiki itself was removed. However, a texvc package is still useful for those who aren't using the MediaWiki Debian package, since it requires OCaml to build from source, which can be pretty difficult.

Pending no other issues, texvc will be included in Debian Stretch. I am also working on having it included in jessie-backports for users still on Jessie.

And as always, thanks to Moritz for reviewing and sponsoring the package!

MediaWiki not creating a log file and cannot access the database

Published 22 Dec 2016 by sealonging314 in Newest questions tagged mediawiki - Webmasters Stack Exchange.

I'm trying to set up MediaWiki on an Apache2 server. Currently, when I navigate to the directory where the wiki is stored in my web browser, I see the contents of LocalSettings.php dumped on the screen, as well as this error message:

Sorry! This site is experiencing technical difficulties.

Try waiting a few minutes and reloading.

(Cannot access the database)

I have double-checked the database name, username, and password in LocalSettings.php, and I am able to log in using these credentials on the web server. I am using a mysql database.

I have been trying to set up a debug log so that I can see a more detailed error message. Here's what I've added to my LocalSettings.php:

$wgDebugLogFile = "/var/log/mediawiki/debug-{$wgDBname}.log";

The directory /var/log/mediawiki has 777 permissions, but no log file is even created. I've tried restarting the Apache server, which doesn't help.

Why is MediaWiki not creating a debug log? Are there other logs that I should be looking at for more detailed error messages? What could the reason be for the error message that I'm getting?

Cyrus development and release plans

Published 22 Dec 2016 by Bron Gondwana in FastMail Blog.

This is the twenty-second post in the 2016 FastMail Advent Calendar. Stay tuned for another post tomorrow.

Cyrus IMAPd development

As we mentioned earlier in this series FastMail is a major contributor to the Cyrus IMAPd project. As the current project lead, it falls to me to write about where we're at, and where we're going.

Since last year's post about the Cyrus Foundation, Cyrus development has both slowed and sped up, depending on what you're looking at. We haven't advanced the Object Storage work because nobody was sponsoring it any more. Ken from CMU makes it to our weekly meeting, but his availability to work on the open source code depends on how busy he is with other responsibilities.

So for now at least, Cyrus is mostly a FastMail show, and obviously anything that FastMail needs for our own production system takes priority for our staff, and that's where our development resources go.

Still, there's been a ton of work. Looking at commits, well over 10% of the total changes ever happened this year:

brong@bat:~/src/cyrus-imapd$ git log --oneline --since 2016-01-01 | wc -l
brong@bat:~/src/cyrus-imapd$ git log --oneline | wc -l

Looking at the code changes, there's a ton of churn too:

brong@bat:~/src/cyrus-imapd$ git diff 4cc2d21 | diffstat | tail -1
 876 files changed, 107374 insertions(+), 97808 deletions(-)

Which includes some really exciting things like redesigning the entire mbname_t structure to allow converting names between internal and external forms really reliably, and manipulating mailboxes without any dotchar or hierarchy separator issues; this removes the cause of a ton of bugs seen with different configurations in the past.

In terms of new features, there is a full backup/restore system built on top of the replication protocol. There's a fairly complete JMAP implementation. There's much better fuzzy search support, built on the Xapian engine.

A large focus of our development this year has been making things simpler and more robust with APIs that hide complexity and manage memory more neatly, and this will continue with a lot more work on the message_t type next year. So there's been plenty of improvement, not all of it visible in the headline feature department.

And it's not just code. We've moved all our issue tracking to GitHub and Nicola unified our documentation into the source code repositories, making it easier to contribute pull requests for docs.

Test all the things

As Chris mentioned in his post about our test infrastructure we've been increasing our test coverage and making sure that tests pass reliably. I'm particularly proud of the integration of ImapTest into our Cassandane test suite, and the fact that we now pass 100% of the tests (once I fixed a couple of bugs in ImapTest! The RFCs are unclear enough that Timo got it wrong, and he's really reliable.) I also added support for CalDAVTester into Cassandane at CalConnect in Hong Kong this year.

Robert has added a ton of tests for all his JMAP, charset and Xapian work.

Our test coverage is still pretty poor by modern development standards, but for a >20 year old project, it's not too shabby, and I'm really glad for Greg's work back when he was at FastMail, and for the ongoing efforts of all the team to keep our tests up to date. It makes our software much better.

In particular, it makes me a lot more comfortable releasing new Cyrus updates to FastMail's users, because for any bug report, the first thing I do now is add a test to Cassandane, so our coverage improves over time.

Going down the FastMail path

To build the 2.5 release, I sat in a pub in Pittsburgh with a giant printout of the 1000+ commits on the FastMail branch and selected which commits should go upstream and which were not really ready yet. The result was a piece of software which was not exactly what anyone had been running, and it kind of shows in some of the issues that have come out with 2.5. The DAV support was still experimental, and most of the new code had never been used outside FastMail.

After releasing 2.5, we looked at what was left of the FastMail-specific stuff, and decided that the best bet was to just import it all into the upstream release, then revert the few things that were really single-provider specific and re-apply them as the fastmail branch. To this day, FastMail production runs only 10 to 50 small changes away from master on a day-to-day basis, meaning that everything we offer as the open source version has had real-world usage.

So this means that things like conversations, Xapian FUZZY search (requires a custom patched version of Xapian for now, though we're working on upstreaming our patches), JMAP (experimental support) and the backup protocol are all in the 3.0 betas. Plenty of that is non-standard IMAP, though we match standard IMAP where possible.

Version 3.0

There is both less and more than we expected in what will become version 3.0. The main reason for a new major version is that some defaults have changed. altnamespace and unixhierarchysep are now default on, to match the behaviour of most other IMAP servers in the world. We've also got a brand new unicode subsystem based on ICU, a close to finished JMAP implementation, Xapian fuzzy search, the new Backup system and of course a rock solid CalDAV/CardDAV server thanks to Ken's excellent work.

Ellie released Cyrus IMAPd 3.0beta6 yesterday, and our plan is to do another beta at the start of January, then a release candidate on January 13th and a full release in February, absent showstoppers.

Plans for next year

Once 3.0 is out, we'll be continuing to develop JMAP, supporting 2.5 and 3.0, and doing more tidying up.

As I mentioned in the Twoskip post, there are too many different database formats internally, and the locking and rollback on error across multiple databases is a mess. I plan to change everything to just one database per user plus one per host, plus spool and cache files. The database engine will have to be totally robust. I'm working on a new design called zeroskip which is going to be amazing, as soon as I have time for it.

I also plan to add back-pointers to twoskip (it requires changes to two record types and a version bump) which will allow MVCC-style lockless reads, even after a crash, and mean less contention for locks in everything. It's all very exciting.

We're heavily involved in standards, with JMAP in the process of being chartered with the IETF for standards track, and our work through CalConnect on a JSON API for calendar events. Cyrus will continue to implement standards and test suites.

The core team is Ken at CMU, Robert S consulting along with the FastMail staff: myself, Ellie, Nicola and Chris. Of these people, Ellie and Robert are focused entirely on Cyrus, and the rest of us share our duties. It's been fantastic having those two who can single-mindedly focus on the project.

There's plenty of space for more contributors in our team! Join us on Freenode #cyrus IRC or on the mailing lists and help us decide the direction of Cyrus for the future. The roadmap is largely driven by what FastMail wants because we're paying for the bulk of the work that's being done, but we're also willing to invest our time in the community, supporting other users and building a well-rounded product; we just have to know what you need!

What we talk about when we talk about push

Published 21 Dec 2016 by Rob N ★ in FastMail Blog.

This is the twenty-first post in the 2016 FastMail Advent Calendar. Stay tuned for another post tomorrow.

Something people ask us fairly often when considering signing up with us is "do you support push?". The simple answer is "yes", but there's some confusion around what people mean when they ask that question, which makes the answer a bit complicated.

When talking about email, most people usually understand "push" to mean that they get near-realtime notifications on a mobile device when new mail arrives. While this seems like a fairly simple concept, making it work depends on some careful coordination between the mail service (eg FastMail), the mail client/app (iOS Mail, the FastMail apps or desktop clients like Thunderbird) and, depending on the mechanism used, the device operating system (eg iOS or Android) and services provided by the OS provider (Apple, Google). Without all these pieces coordinating, realtime notification of new messages doesn't work.

All this means there isn't an easy answer to the question "do you support push?" that works for all cases. We usually say "yes" because for the majority of our customers that is the correct answer.

There are various mechanisms that a mail client can use to inform the user that new mail has arrived, each with pros and cons.


Pure IMAP clients (desktop and mobile) have traditionally had a few mechanisms available to them to do instant notifications.

Polling


By far the simplest way for a client to see if there's new mail is to just ask the server repeatedly. If it checks for new mail every minute it can come pretty close to the appearance of real-time notification.

The main downsides to this approach are network data usage and (except for "always-on" devices like desktop computers) battery life.

Network usage can be a problem if it takes a lot of work to ask the server for changes. In the worst case, you have to ask for the entire state of the mailbox on the server and compare it to a record on the device of what was there the last time it checked. Modern IMAP has mechanisms (such as CONDSTORE and QRESYNC) that allow a client to get a token from the server that encodes the current server mailbox state at that time. Next time the client checks, it can present that token to say "give me all the changes that happened since I was last here". If both the client and server support this, it makes the network usage almost nothing in the common case where there's no change.
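The token mechanism can be sketched in a few lines. This is a toy model of the idea, in the spirit of CONDSTORE/QRESYNC, not real IMAP syntax; the class and method names are made up for illustration:

```python
# Toy model of token-based incremental sync, in the spirit of IMAP
# CONDSTORE/QRESYNC. Names are made up for illustration; this is not
# real IMAP protocol syntax.

class MailboxState:
    """Server side: every change bumps a modification sequence number."""

    def __init__(self):
        self.highest_modseq = 0
        self.changes = []  # list of (modseq, description)

    def record_change(self, description):
        self.highest_modseq += 1
        self.changes.append((self.highest_modseq, description))

    def changes_since(self, token):
        """Return changes newer than the client's token, plus a new token."""
        newer = [(seq, d) for seq, d in self.changes if seq > token]
        return newer, self.highest_modseq


server = MailboxState()
server.record_change("new message uid=101")
server.record_change("flags changed uid=95")

# First sync: no token yet, so the client fetches everything.
changes, token = server.changes_since(0)
print(len(changes))  # 2

# Later poll with the saved token and no new mail: almost free on the wire.
changes, token = server.changes_since(token)
print(len(changes))  # 0
```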

Battery life can become a problem in that the system has to wake up regularly and hit the network to see if anything happened. This is wasteful because on most of these checks you won't have received any mail, so the device ends up waking up and going back to sleep for no real reason.

IDLE


To avoid the need to poll constantly, IMAP has a mechanism called IDLE. A client can open a folder on the server, and then "idle" on it. This holds the connection to the server open but lets the client device go to sleep. When something happens on the server, it sends a message to the client on that connection, which wakes the device so that it can then ask what changed.
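On the wire, an IDLE session looks roughly like this (tags and message counts are illustrative; `C:` is the client, `S:` is the server):

```
C: a002 SELECT INBOX
C: a003 IDLE
S: + idling
   ...time passes; a new message arrives...
S: * 24 EXISTS
C: DONE
S: a003 OK IDLE terminated
```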

For arbitrary IMAP clients that do not have specific support for device push channels or other mechanisms, this is usually what's happening. IDLE works, but has a couple of issues that make it less than ideal.

The main one is that IDLE only allows the client to find out about changes to a single folder. If the client wants to be notified about changes on multiple folders, it must make multiple IMAP connections, one for each folder. This makes clients more complex and may run into problems if there are many connections as some servers limit the number of simultaneous connections for a user.

The other issue, particularly on mobile devices, is that IDLE operates over TCP. This can cause problems when devices change networks (including moving between mobile cells), which may break the connection. Because of the way TCP operates, it's not always possible for a client to detect that the connection is no longer working, which means the client has to resort to regular "pings" (typically requiring a regular wakeup) or rely on the device to tell it when the network has changed.

IDLE is good for many cases, and implemented by almost every IMAP client out there, but it's definitely the most basic option.

NOTIFY


To deal with the one-folder-per-connection problem, IMAP introduced another mechanism called NOTIFY. This allows a client to request a complex set of changes it's interested in (including a list of folders and a list of change types, like "new message" or "message deleted") and be informed of them all in one go.

This is a step in the right direction, but still has the same problem in that it operates over TCP. It's also a rather complicated protocol and hard to implement correctly, which I expect is why almost no clients or servers support it. Cyrus (the server that powers FastMail) does not implement it and probably never will.

Device push services

Most (perhaps all) the mobile device and OS vendors provide a push service for their OS. This isn't limited to iOS and Android - Windows, Blackberry and even Ubuntu and the now-defunct Firefox OS all have push services.

Conceptually these all work the same way. An app asks the device OS for some sort of push token, which is a combination of device and app identifier. The app sends this token to some internet service that wants to push things to it (eg FastMail). When the service has something to say, it sends a message to the push service along with the token. The push service holds that message in a queue until the device contacts it and requests the messages, then it passes them along. The device OS then uses the app identifier in the token to wake up the appropriate app and pass the message to it. The app can then take the appropriate action.

Deep down, the device OS will usually implement this by asking the push service to give it any new messages. There's usually some sort of polling involved but it can also be triggered by signalling from the network layer, such as a network change. It's not substantially different to an app polling regularly, but the OS can be much more efficient because it has a complete picture of the apps that are running and their requirements as well as access to network and other hardware that might assist with this task.
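The queueing flow above can be sketched as a toy model. All names here are illustrative; real push services (APNs, GCM and friends) have their own APIs and, as noted below, do not guarantee delivery:

```python
# Toy model of the push-token flow: the mail service pushes into a
# per-token queue, and the device OS drains it and wakes the app.
from collections import defaultdict

class PushService:
    """Vendor push service: queues messages per token until the device syncs."""

    def __init__(self):
        self.queues = defaultdict(list)

    def push(self, token, message):
        # Called by the mail service when something changes.
        self.queues[token].append(message)

    def fetch(self, token):
        # Called (conceptually) by the device OS, which then wakes the app.
        messages, self.queues[token] = self.queues[token], []
        return messages

service = PushService()

# The app obtains a token (device id + app id) and registers it with the
# mail service, which later signals a change:
token = "device-123/mail-app"
service.push(token, {"type": "mailbox-changed"})

# The device OS picks up the queued message and wakes the app:
print(service.fetch(token))  # [{'type': 'mailbox-changed'}]
print(service.fetch(token))  # [] -- queue drained
```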

FastMail's Android app

Notifications in our Android app work exactly along these lines. At startup, the app registers a push token with FastMail. When something changes in the user's mailbox, we send a message with the push token to Google's Cloud Messaging push service (or, for non-Google devices, Pushy or Amazon's Device Messaging services) to signal the change. This eventually (typically within a couple of seconds) causes the app to be woken up by the OS, and it calls back to the server to get the latest changes and display them in the notification shade.

The one downside of this mechanism is that it's possible for the message from the push service to be missed. Google's push service is explicitly designed not to guarantee delivery, and will quite aggressively drop messages that can't be delivered to the device in a timely fashion (usually a couple of minutes). This can happen when the device is off-network at the time or even just turned off. For this reason, the app also asks the OS to wake it on network-change and power events, which also cause it to ask our servers for any mailbox changes. In this way, it appears that nothing gets missed.

FastMail's iOS app

The FastMail iOS app works a little differently. One interesting feature of the iOS push system is that it's possible to include a message, icon, sound, "badge" (the count of unread messages on the app icon) and action in the push message, which the OS will then display in the notification shade. In this way the app never gets woken at all. The OS itself displays the notification and invokes the action when the notification is tapped. In the case of our app, the action is to start the app proper and then open the message in question (we encode a message ID into the action we send).
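A push message of this style, in Apple's payload format, might look something like the following sketch (all values illustrative; the `msgid` custom key is our assumption of how a message ID could be encoded into the action):

```json
{
  "aps": {
    "alert": { "title": "Jane Doe", "body": "Re: dinner on Friday" },
    "badge": 3,
    "sound": "default"
  },
  "msgid": "example-message-id"
}
```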

This is somewhat inflexible, as we can only send the kinds of data that Apple define in their push format, and there's arguably a privacy concern in that we're sending fragments of mail contents through a third-party service (though you already have to trust Apple if you're using their hardware so it's perhaps not a concern). The main advantage is that you get to keep your battery because the app never gets woken and never hits the network to ask for changes. It's hard to get more efficient than doing nothing at all!

Since iOS 8 it's been possible to have a push message wake an app for processing, just like Android and every other platform. A future release of our iOS app will take advantage of this to bring it into line.

iOS Mail

The Mail app that ships on iOS devices is probably one of the better IMAP clients out there. Apple, however, chose not to implement IDLE, probably because of the battery life problems. Instead they do regular polling, but the minimum poll interval is 15 minutes. This works well and keeps battery usage in check, but is not quite the timely response that most people are after. When used in conjunction with their iCloud service, however, iOS Mail can do instant notifications, and it's this that most people think of as push.

It works pretty much exactly like FastMail's Android app. Upon seeing that the IMAP server offers support for Apple's push mechanism, the app sends the server a list of folders that it's interested in knowing about changes for, and a push token. Just as described above, when something changes the IMAP server sends a message through Apple's push service, which causes the Mail app to wake and make IMAP requests to get the changes.

The nice thing about this for an IMAP client is that it doesn't need to hold the TCP connection open. Even if it drops, as it might if there's been no new mail for hours, it can just reconnect and ask for the changes.

Of course, this mechanism is limited to the iOS Mail app with servers that support this extension. Last year, Apple were kind enough to give us everything we need to implement this feature for FastMail, and it's fast become one of our most popular features.

Exchange ActiveSync

One of the first systems to support "push mail" as it's commonly understood was Microsoft's Exchange ActiveSync, so it rates a mention. Originally used on Windows Mobile as early as 2004 to synchronise with Exchange servers, it's still seen often enough, particularly on Android devices (which support it out of the box). There's a lot that we could say about ActiveSync, but as a push technology there's nothing particularly unusual about it.

The main difference between it and everything else is that it doesn't have a vendor-provided push service. Ultimately, the ActiveSync "service" on the device has to regularly poll any configured Exchange servers to find out about new mail and signal this to any interested applications. While not as efficient as having the OS do it directly, it can come pretty close particularly on Windows and Android which allows long-lived background services.

Calendars and contacts

In October we added support for push for calendars and contacts on iOS and macOS as well. In terms of push, they work on exactly the same concept as IMAP: the app requests notifications for a list of calendars and address books and presents a push token. The server uses that token to inform the push service, which passes the message through. The OS wakes the app and it goes back to the server and asks for updates. There are some structural differences in the way this is implemented for CalDAV/CardDAV vs IMAP, but mostly it uses the same code and data model as the rest.

The future

Sadly, the state of "push" for mail is rather fragmented at the moment. Anything can implement IMAP IDLE and get something approximating push, but it's difficult (but not impossible) to make a really nice experience. To do push in a really good (and battery-friendly) way, you're tied to vendor-specific push channels.

We're currently experimenting with a few things that may or may not help to change this.

Time will tell if these experiments will go anywhere. These are the kind of things that require lots of different clients and servers to play with and see what works and what doesn't. That's not something we can do by ourselves, but if you're a mail client author and you'd like to be able to do better push than what IMAP IDLE can give you, you should talk to us!

DNSSEC & DANE: no traction yet

Published 20 Dec 2016 by Rob Mueller in FastMail Blog.

This is the twentieth post in the 2016 FastMail Advent Calendar. Stay tuned for another post tomorrow.

Back in our 2014 advent series we talked about our DNS hosting infrastructure and our desire to support DNSSEC and DANE at some point in the future. It's been two years since then, and we still don't support either of them. What gives?

At this point we don't have any particular timeline for supporting DNSSEC or DANE. To be clear, these two features are fairly interconnected for us; the main reason for supporting DNSSEC would be to support DANE. DANE provides a way for a domain to specify that it requires an encrypted connection and the SSL/TLS certificate that should be presented, rather than just accepting an opportunistically-encrypted one. This avoids a MITM downgrade attack and/or interception attack. Currently no email servers (that we're aware of) verify that the domain of a certificate matches the server name they connected to, or that the certificate is issued by a known CA (Certificate Authority). This means that currently server-to-server email can be opportunistically encrypted, and thus can't be read by a passive intercepting party, but it isn't protected against an active MITM attack.
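For context, DANE works by publishing a TLSA record for the MX host in a DNSSEC-signed zone. A sketch (hostname and digest are illustrative):

```
; Usage 3 (DANE-EE), selector 1 (public key), matching type 1 (SHA-256)
_25._tcp.mail.example.com. IN TLSA 3 1 1 (
    0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef )
```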

Unfortunately uptake of DANE has been very slow, and it appears that most major email providers (e.g. Gmail, Outlook365, Yahoo, and many more) have no interest in supporting it at all. This severely reduces the incentive to implement as it would not improve protection for the majority of email.

Instead, providers appear to be converging on an SMTP MTA Strict Transport Security protocol, analogous to the HTTP Strict Transport Security feature that tells browsers to always use https:// when connecting to a website. It's likely this will get much greater traction. We're monitoring progress and intend to implement the standard when it is complete.
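As a sketch of the mechanism (the exact format was still in flux while the standard was being drafted): a domain advertises in DNS that it has a policy, and serves the policy itself over HTTPS, along these lines (names and values illustrative):

```
; DNS record advertising that a policy exists
_mta-sts.example.com. IN TXT "v=STSv1; id=20161220T000000"

# Policy file served at https://mta-sts.example.com/.well-known/mta-sts.txt
version: STSv1
mode: enforce
mx: mail.example.com
max_age: 604800
```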

Along with a lack of sites supporting DANE, there are also a whole lot of scary implications about running a DNSSEC service. DNSSEC is fragile and easy to get wrong in subtle ways. A single small mistake can completely break DNS for your domain. And worse, in our case, break the DNS for the tens of thousands of domains we host for our customers.

Even some of the biggest players make mistakes. APNIC, the RIR that allocates IP addresses for the entire Asia-Pacific region (so an important and core part of the internet), managed to mess up their DNSSEC for .arpa, meaning reverse DNS lookups for a large number of IP addresses failed for some time!

Not to mention DNSSEC outages at places like NIST (the National Institute of Standards and Technology); even an organisation that makes DNSSEC software and attempts to "drive adoption of Domain Name System Security Extensions (DNSSEC) to further enhance Internet security" has had multiple failures.

If the people that help run the internet or write the software and encourage the use of DNSSEC can't get it right, it's scary to think what non-experts could mess up. The litany of DNSSEC outages is only likely to increase, given the tiny amount of real world uptake it's had.

We're all for security and privacy, but part of that is ensuring the availability of your email as well. We want to provide real, useful benefits to users with a low chance of things going wrong. At the moment, the risk trade-off profile for DNSSEC/DANE doesn't seem right to us.

Secure datacentre interlinks

Published 19 Dec 2016 by Bron Gondwana in FastMail Blog.

This is the nineteenth post in the 2016 FastMail Advent Calendar. Stay tuned for another post tomorrow.

Securing the links between datacentres

We have always run physically separate switches for our internal and external networks at FastMail, and likewise run our own encryption for the links between our datacentres. I'm a strong believer in airgaps and in not trusting anything outside our own locked racks within the datacentre.

When I started at FastMail, we didn't have much "offsite". There was a real machine at another provider running secondary MX and DNS. For a while there was a VPS in Singapore that was purely a forwarding service to deal with routing issues in Asia.

We used OpenVPN with symmetric keys for these simple links. It was easy to set up, easy to trust (it uses heavily-reviewed TLS negotiation to set up the encryption), and I already knew and liked the underlying concept from using CIPE to link our offices at my previous job.

We had no need to use anything fancier for these low bandwidth needs.

Growing pains

When FastMail was purchased by Opera Software in 2010 we set up a secondary datacentre in Iceland with real time replication of all email. I did a ton of work in Cyrus IMAP to make replication more efficient, but it was still maxing out the CPU on the VPN machine.

OpenVPN is single threaded for simplicity of the code and robustness, but it meant that our shiny blade servers with 24 threads (2 processors x 6 cores x hyperthreading) were only using 1/24 of their capacity.

Software or Hardware?

Opera's sysadmin department tried to convince me to buy their favourite hardware (Juniper) rather than staying with software. Of course everything is hardware eventually, but running the VPN on the commodity hardware makes it easy to substitute and means we don't have to keep as many spares. There are always a couple of blades available to be repurposed if one of them has a hardware failure. Keeping a second high-end Juniper around just in case would have been very expensive, but not having a hot spare is untenable.

So it had to be software. The only serious contender was IPsec.

Black Swan

Debian has a page with some Linux IPsec history. There was FreeS/WAN (free secure wide area network) and then the KAME API for kernel access, and freeswan got abandoned and we had Openswan and strongSwan and libreswan, and there's KLIPS, and ... they're all awful.

After much frustration with bogus and incorrect documentation, I managed to get IPsec working. I never liked it; the implementations blatantly violate the Unix philosophy. OpenVPN gives you a tun device which acts like a virtual secure cable between your two datacentres that you can route down. IPsec (Openswan at least, which was the only one I could even get to work) routes the packets over the existing interface, but applies its magic in a way that the routing tools and firewall tools don't really understand. I was never 100% confident that the routing rules would keep unencrypted packets off the external network in failure modes.

The configuration file was full of 'left' and 'right', and I certainly never figured out how to route arbitrary networks through the link; it looked like you had to set up a separate IPsec configuration for each network range you wanted to route. The config looked something like this:

conn nyi-to-thor
   left=[% conf.nyi.extip %]
   leftid=@[% conf.nyi.exthostname %]
   leftsourceip=[% conf.nyi.intip %]
   leftrsasigkey=[% conf.sigkey %]
   leftnexthop=[% conf.thor.extip %]
   right=[% conf.thor.extip %]
   rightid=@[% conf.thor.exthostname %]
   rightsourceip=[% conf.thor.intip %]
   rightrsasigkey=[% conf.sigkey %]
   rightnexthop=[% conf.nyi.extip %]

Note the subnet ranges embedded in the configuration. We didn't route anything but those network ranges through IPsec, using the Opera internal management ranges for everything else during the Opera years.

But the network in Iceland had availability issues and every time the network dropped out for a couple of minutes, IPsec would fail to resync and have to be restarted manually. Even regular failover for maintenance and testing (we always had two hosts configured and ready to go in case of machine failure) was unreliable.

Maybe I'm just really stupid and can't make IPsec work for me, or I backed the wrong swan, I dunno. Anyway, IPsec never sat well. It was the least reliable part of our infrastructure.

Back to OpenVPN

Nothing much had changed last year when I finally got jack of Openswan bailing on us and looked around to see if there was anything else. But I figured: if we ran multiple VPNs in parallel and routed traffic between them, that must work, right?

Which leads us to our current configuration, a mesh of OpenVPN links between our datacentres. We're currently running 4 channels for each pairing, though very little traffic goes along one of the edges.

Let's take a look at how it's done. First the heavily templated config file:

[%- SET conf = global.vpnlinks.$datacentre.$link -%]
[%- SET local = global.vpndata.$datacentre %]
[%- SET remote = global.vpndata.${conf.dest} %]
local [% local.extip %]
lport [% conf.lport %]
remote [% remote.extip %]
rport [% conf.rport %]
dev tun
ifconfig [% conf.lhost %] [% conf.rhost %]
ping 5
ping-restart 30
script-security 2
up /etc/openvpn/up-[% link %]
down /etc/openvpn/down-[% link %]
cipher [% conf.cipher %]
secret /etc/secure/openvpn/keys/[% conf.keyname %].key

Let's look at the interesting bits here. We run on different ports for each link so that we're running separate OpenVPN processes with no contention for the UDP ports.

I'm a bit cagey about showing IP addresses since we moved our VPN links on to hidden network ranges to avoid DDoS attacks taking out our networks, but let's take a look at the port numbers:

[brong@qvpn1 ~]$ grep port /etc/openvpn/*.conf
/etc/openvpn/nq1.conf:lport 6011
/etc/openvpn/nq1.conf:rport 6001
/etc/openvpn/nq2.conf:lport 6012
/etc/openvpn/nq2.conf:rport 6002
/etc/openvpn/nq3.conf:lport 6013
/etc/openvpn/nq3.conf:rport 6003
/etc/openvpn/nq4.conf:lport 6014
/etc/openvpn/nq4.conf:rport 6004
/etc/openvpn/sq1.conf:lport 7011
/etc/openvpn/sq1.conf:rport 7001
/etc/openvpn/sq2.conf:lport 7012
/etc/openvpn/sq2.conf:rport 7002
/etc/openvpn/sq3.conf:lport 7013
/etc/openvpn/sq3.conf:rport 7003
/etc/openvpn/sq4.conf:lport 7014
/etc/openvpn/sq4.conf:rport 7004

So each config has a local and remote port which is completely separate, but contiguous to allow us to firewall a port range on each machine.

The end result of all these settings is very reliable routing across restarts and connection failures.

Up and Down

The most interesting part is the up and down scripts. These set up the routing. I'll show the full up script, which contains all the functions, so it's enough to make this blog post useful for someone wanting to duplicate our setup.

[%- SET conf = global.vpnlinks.$datacentre.$link -%]
[%- SET local = global.vpndata.$datacentre %]
[%- SET remote = global.vpndata.${conf.dest} %]
#!/usr/bin/perl
use strict;
use warnings;
use IO::LockedFile;

# [% link %]

# Serialise route changes on this machine: all the up/down scripts may
# run concurrently when links bounce.
my $lock = IO::LockedFile->new(">/var/run/vpnroute.lock");

my $dev = shift;
disable_rpfilter($dev);
set_queue_discipline($dev, "sfq");

[%- FOREACH netname = remote.networks %]
[%- SET data = $$netname %]
add_route("[% data.netblock %]", "[% conf.rhost %]", "[% $ %]");
[%- END %]

sub disable_rpfilter {
  my $dev = shift;
  print "echo 0 > /proc/sys/net/ipv4/conf/$dev/rp_filter\n";
  if (open(FH, ">", "/proc/sys/net/ipv4/conf/$dev/rp_filter")) {
    print FH "0\n";
    close(FH);
  }
}

sub set_queue_discipline {
  my $dev = shift;
  my $qdisc = shift;
  runcmd('/sbin/tc', 'qdisc', 'replace', 'dev', $dev, 'root', $qdisc);
}

sub add_route {
  my ($netblock, $rhost, $srcip) = @_;
  my @existing = get_routes($netblock);
  return if grep { $_ eq $rhost } @existing;
  my $cmd = @existing ? 'change' : 'add';
  push @existing, $rhost;
  runcmd('ip', 'route', $cmd, $netblock, 'src', $srcip, map { ('nexthop', 'via', $_) } @existing);
}

sub del_route {
  my ($netblock, $rhost, $srcip) = @_;
  my @existing = get_routes($netblock);
  return unless grep { $_ eq $rhost } @existing;
  @existing = grep { $_ ne $rhost } @existing;
  my $cmd = @existing ? 'change' : 'delete';
  runcmd('ip', 'route', $cmd, $netblock, 'src', $srcip, map { ('nexthop', 'via', $_) } @existing);
}

sub get_iproute {
  my @res = `ip route`;
  my %r;
  my $dst;
  foreach (@res) {
    if (s/^\s+//) {
      # indented continuation: a nexthop line for the previous destination
      my @items = split;
      my $cat = shift @items;
      my %args = @items;
      push @{$r{$dst}{$cat}}, \%args if exists $r{$dst};
      next;
    }
    my @items = split;
    $dst = shift @items;
    my %args = @items;
    $r{$dst} = \%args if !exists $args{dev} || $args{dev} =~ m/^tun\d+$/;
  }
  return \%r;
}

sub get_routes {
  my $dst = shift;
  my $routes = get_iproute();
  my $nexthop = $routes->{$dst}{nexthop} || [$routes->{$dst}];
  return grep { $_ } map { $_->{via} } @$nexthop;
}

sub runcmd {
  my @cmd = @_;
  print "@cmd\n";
  system(@cmd);
}
The down script is identical, except that it doesn't run the rpfilter or queue discipline lines, and of course it runs del_route instead of add_route.

Firstly we disable rp_filter on the interface. rp_filter drops any packets that wouldn't route to this same interface, and with multiple interfaces all routing the same network range, it would cause packets to fail to route. We still firewall the tun+ interfaces to only allow packets from our internal datacentre ranges of course.

Next we set the queue discipline to sfq, or "Stochastic Fairness Queueing", which is a low CPU usage hashing algorithm to distribute the load fairly across all links.

Since there's no way to add or remove a single nexthop of a multipath route directly, we take a global lock using the Perl IO::LockedFile module while reading and writing routes, and hence we can just read the current routing table, manipulate it, and write out the new config. The lock is necessary because commonly all eight links on a machine get spun up at once, so they're likely to be making changes concurrently.

You can see in get_routes it has to handle two different styles of output from ip route, just a single destination when only one link is up, and also multiple nexthop lines.

So we have a list of nexthop routes with the same metric via the different OpenVPN links, and we manipulate that list and then tell the kernel to update the routing table.


Here's how it looks in the system routing table on qvpn1 in Quadranet, our LA datacentre. The links are 'sq' from Switch (Amsterdam) and 'nq' from NYI (New York).

  src 
    nexthop via  dev tun0 weight 1
    nexthop via  dev tun1 weight 1
    nexthop via  dev tun2 weight 1
    nexthop via  dev tun3 weight 1
  via  dev eth0  metric 1
  src 
    nexthop via  dev tun4 weight 1
    nexthop via  dev tun5 weight 1
    nexthop via  dev tun6 weight 1
    nexthop via  dev tun7 weight 1
  via  dev eth0  metric 1
 dev tun0  proto kernel  scope link  src 
 dev tun1  proto kernel  scope link  src 
 dev tun3  proto kernel  scope link  src 
 dev tun2  proto kernel  scope link  src 
 dev tun7  proto kernel  scope link  src 
 dev tun5  proto kernel  scope link  src 
 dev tun4  proto kernel  scope link  src 
 dev tun6  proto kernel  scope link  src 

(Quadra is .207, Switch is .206, NYI is .202)

If I take down a single one of the OpenVPN links, the routing just keeps working as we remove the one hop:

[brong@qvpn1 hm]$ /etc/init.d/openvpn stop sq2
Stopping virtual private network daemon: sq2.
[brong@qvpn1 hm]$ ip route | grep interesting
  src 
    nexthop via  dev tun4 weight 1
    nexthop via  dev tun6 weight 1
    nexthop via  dev tun7 weight 1
 dev tun7  proto kernel  scope link  src 
 dev tun4  proto kernel  scope link  src 
 dev tun6  proto kernel  scope link  src 

And then bring it back up again:

[brong@qvpn1 hm]$ /etc/init.d/openvpn start sq2
Starting virtual private network daemon: sq2.
[brong@qvpn1 hm]$ ip route | ...
  src 
    nexthop via  dev tun4 weight 1
    nexthop via  dev tun6 weight 1
    nexthop via  dev tun7 weight 1
    nexthop via  dev tun5 weight 1

To see the commands that it ran, we can just run the up and down scripts directly:

[brong@qvpn1 hm]$ /etc/openvpn/down-sq2 tun5
ip route change src nexthop via nexthop via nexthop via
[brong@qvpn1 hm]$ /etc/openvpn/up-sq2 tun5
echo 0 > /proc/sys/net/ipv4/conf/tun5/rp_filter
/sbin/tc qdisc replace dev tun5 root sfq
ip route change src nexthop via nexthop via nexthop via nexthop via

Plenty of headroom

This is comfortably handling the load with the four links to NYI, which get most of the traffic. (It's quiet on the weekend while I'm writing this, and during busy times they might be using more CPU, but four cores are enough to supply our current bandwidth peaks.)

  11533 root      20   0   24408   3856   3264 S  18.7  0.0   7120:19 openvpn   
  11521 root      20   0   24408   3912   3324 S  17.4  0.0   6690:50 openvpn   
  11527 root      20   0   24408   3908   3320 S  12.4  0.0   2741:46 openvpn   
  11539 root      20   0   24408   3796   3208 S   7.5  0.0   3749:41 openvpn 

There are heaps more CPUs available in the box if we need to spin up more concurrent links, and it's just a matter of adding an extra line to the network layout data file and then running make -C conf/openvpn install; /etc/init.d/openvpn start to bring the link up at each end. The routing algorithm will automatically spread the load once the two ends pair up.

We're much happier with our datacentre links now. We can manage firewalls and routes with our standard tooling and they are rock solid.

Arriving in Jordan

Published 18 Dec 2016 by Tom Wilson in tom m wilson.

I’ve arrived in the Middle East, in Jordan.  It is winter here.  Yesterday afternoon I visited the Amman Citadel, a raised acropolis in the centre of the capital. It lies atop a prominent hill in the centre of the city, and as you walk around the ruins of Roman civilisation you look down on box-like limestone-coloured apartment […]

Clone an abandoned MediaWiki site

Published 17 Dec 2016 by Bob Smith in Newest questions tagged mediawiki - Webmasters Stack Exchange.

Is there any way to clone a MediaWiki site that's been abandoned by the owner and all admins? None of the admins have been seen in 6 months and all attempts to contact any of them over the past 3-4 months have failed and the community is worried for the future of the Wiki. We have all put countless man-hours into the Wiki and to lose it now would be beyond devastating.

What would be the simplest way to go about this?


Manually upgrading Piwigo

Published 15 Dec 2016 by Sam Wilson in Sam's notebook.

There’s a new version of Piwigo out, and so I must upgrade. However, I’ve got things installed so that the web server doesn’t have write-access to the application files (as a security measure), and so I can’t use the built-in automatic upgrader.

I decided to switch to using Git to update the files, to make future upgrades much easier and without having to make anything writable by the server (even for some short amount of time).

First lock the site, via Tools > Maintenance > Lock gallery, then get the new code:

$ git clone
$ cd
$ git checkout 2.8.3

Copy the following files:

/upload (this is a symlink on my system)

The following directories must be writable by the web server: /_data and /upload (including /upload/buffer; I was getting an “error during buffer directory creation” error).

Then browse to /upgrade.php to run any required database changes.

I’ve installed these plugins using Git as well: Piwigo-BatchDownloader, Flickr2Piwigo, and piwigo-openstreetmap. The OSM plugin also requires /osmmap.php to be created with the following (the plugin would have created it if it was allowed):

<?php
define( 'PHPWG_ROOT_PATH', './' );
include_once( PHPWG_ROOT_PATH . 'plugins/piwigo-openstreetmap/osmmap.php' );

That’s about it. Maybe these notes will help me remember next time.


Published 13 Dec 2016 by fabpot in Tags from Twig.

Sri Lanka: The Green Island

Published 12 Dec 2016 by Tom Wilson in tom m wilson.

I just arrived in Tangalle.  What a journey… local bus from Galle Fort. Fast paced Hindi music, big buddha in the ceiling with flashing lights, another buddha on the dash board of the bus wrapped in plastic, a driver who swung the old 1970s Leyland bus around corners to the point where any more swing […]

Spices and Power in the Indian Ocean

Published 12 Dec 2016 by Tom Wilson in tom m wilson.

I’m in Galle, on the south-east coast of Sri Lanka. From the rooftop terrace above the hotel room I’m sitting in the sound of surf gently crumbling on the reef beyond the Fort’s ramparts can be heard, and the breathing Indian ocean is glimpsed through tall coconut trees. The old city juts out into the […]

wikidiff2 1.4.1

Published 7 Dec 2016 by legoktm in The Lego Mirror.

In MediaWiki 1.28, MaxSem improved diff limits in the pure PHP diff implementation that ships with MediaWiki core. However, Wikimedia and other larger wikis use a PHP extension called wikidiff2 for better performance and additional support for Japanese, Chinese, and Thai.

wikidiff2 1.4.1 is now available in Debian unstable and will ship in stretch, and should soon be available in jessie-backports and my PPA for Ubuntu Trusty and Xenial users. This is the first major update of the package in two years. Installation in MediaWiki 1.27+ is now even more straightforward: as long as the module is installed, it will automatically be used; no global configuration is required.

Additionally, releases of wikidiff2 will now be hosted and signed on

Tropical Architecture – Visiting Geoffrey Bawa’s Place

Published 6 Dec 2016 by Tom Wilson in tom m wilson.

I’ve arrived in Sri Lanka. Let me be honest: first impressions of Colombo bring forth descriptors like pushy, moustache-wearing, women-dominating, smog-covered, coarse, opportunistic and disheveled. It is not a city that anybody should rush to visit.  However this morning I found my way through this city to a tiny pocket of beauty and calm – the […]

Walking to the Mountain Monastery

Published 4 Dec 2016 by Tom Wilson in tom m wilson.

That little dot in the north west of south-east Asia is Chiang Mai.  As you can see there is a lot of darkness around it.  Darkness equals lots of forest and mountains. I’ve recently returned from the mountains to Chiang Mai.  Its very much a busy and bustling city, but even here people try to bring […]

Where is "MediaWiki:Vector.css" of my MediaWiki

Published 4 Dec 2016 by hasanghaforian in Newest questions tagged mediawiki - Webmasters Stack Exchange.

I want to install Skin:Vector-DarkCSS on my MediaWiki. It should be simple, but the second step of the installation instructions says I have to edit MediaWiki:Vector.css on my wiki. I searched for a file named MediaWiki:Vector.css, but could not find it in the MediaWiki home directory. Where is that file? Do I need to create it?

Forget travel guides.

Published 29 Nov 2016 by Tom Wilson in tom m wilson.

Lonely Planet talks up every country in the world, and if you read their guides every city and area seems to have a virtue worth singing. But the fact is that we can’t be everywhere and are forced to choose where to be as individuals on the face of this earth. And some places are just […]

MediaWiki VisualEditor Template autocomplete

Published 29 Nov 2016 by Patrick in Newest questions tagged mediawiki - Webmasters Stack Exchange.

I'm running MediaWiki 1.28, but I had this problem with 1.27 and was hoping it would be resolved.

I am using VisualEditor, and would like my users to be able to get an autocomplete when inserting a template.

I have TemplateData installed, and can confirm api.php is returning matches:

62:{title: "Template:DefaultHeader", params: {},…}
117:{title: "Template:DefaultFooter", params: {},…}

But I don't get a drop-down, and there are no errors in the debug console.

Back That Thing Up

Published 29 Nov 2016 by Jason Scott in ASCII by Jason Scott.


I’m going to mention two backup projects. Both have been under way for some time, but the world randomly decided the end of November 2016 was the big day, so here I am.

The first is that the Internet Archive is adding another complete mirror of the Wayback machine to one of our satellite offices in Canada. Due to the laws of Canada, to be able to do “stuff” in the country, you need to set up a separate company from your US concern. If you look up a lot of major chains and places, you’ll find they all have Canadian corporations. Well, so does the Internet Archive and that separate company is in the process of getting a full backup of the Wayback machine and other related data. It’s 15 petabytes of material, or more. It will cost millions of dollars to set up, and that money is already going out the door.

So, if you want, you can go to the donation page and throw some money in that direction and it will make the effort go better. That won’t take very long at all and you can feel perfectly good about yourself. You need read no further, unless you have an awful lot of disk space, at which point I suggest further reading.


Whenever anything comes up about the Internet Archive’s storage solutions, there’s usually a fluttery cloud of second-guessing and “big sky” suggestions about how everything is being done wrong and why not just engage a HBF0_X2000-PL and fark a whoziz and then it’d be solved. That’s very nice, but there’s about two dozen factors in running an Internet Archive that explain why RAID-1 and Petabyte Towers combined with self-hosting and non-cloud storage has worked for the organization. There are definitely pros and cons to the whole thing, but the uptime has been very good for the costs, and the no-ads-no-subscription-no-login model has been working very well for years. I get it – you want to help. You want to drop the scales from our eyes and you want to let us know about the One Simple Trick that will save us all.

That said, when this sort of insight comes out, it’s usually back-of-napkin and done by someone who will be volunteering several dozen solutions online that day, and that’s a lot different than coming in for a long chat to discuss all the needs. I think someone volunteering a full coherent consult on solutions would be nice, but right now things are working pretty well.

There are backups of the Internet Archive in other countries already; we're not that bone stupid. But this would be a full, constantly maintained backup in Canada, and one that would be interfaced with other worldwide stores. It's a preparation for an eventuality that hopefully won't come to pass.

There’s a climate of concern and fear that is pervading the landscape this year, and the evolved rat-creatures that read these words in a thousand years will be able to piece together what that was. But regardless of your take on the level of concern, I hope everyone agrees that preparation for all eventualities is a smart strategy as long as it doesn’t dilute your primary functions. Donations and contributions of a monetary sort will make sure there’s no dilution.

So there’s that.

Now let’s talk about the backup of this backup a great set of people have been working on.


About a year ago, I helped launch INTERNETARCHIVE.BAK. The goal was to create a fully independent distributed copy of the Internet Archive that was not reliant on a single piece of Internet Archive hardware and which would be stored on the drives of volunteers, with 3 geographically distributed copies of the data worldwide.

Here’s the current status page of the project. We’re backing up 82 terabytes of information as of this writing. It was 50 terabytes last week. My hope is that it will be 1,000 terabytes sooner rather than later. Remember, this is 3 copies, so each terabyte backed up needs three terabytes of volunteer disk space.

For some people, a terabyte is this gigantically untenable number and certainly not an amount of disk space they just have lying around. Other folks have, at their disposal, dozens of terabytes. So there’s lots of hard drive space out there, just not evenly distributed.

The IA.BAK project is a complicated one, but the general situation is that it uses the program git-annex to maintain widely-ranged backups from volunteers, with “check-in” of data integrity on a monthly basis. It has a lot of technical meat to mess around with, and we’ve had some absolutely stunning work done by a team of volunteering developers and maintainers (and volunteers) as we make this plan work on the ground.

And now, some thoughts on the Darkest Timeline.


I’m both an incredibly pessimistic and optimistic person. Some people might use the term “pragmatic” or something less charitable.

Regardless, I long ago gave up assumptions that everything was going to work out OK. It has not worked out OK in a lot of things, and there’s a lot of broken and lost things in the world. There’s the pessimism. The optimism is that I’ve not quite given up hope that something can be done about it.

I’ve now dedicated 10% of my life to the Internet Archive, and I’ve dedicated pretty much all of my life to the sorts of ideals that would make me work for the Archive. Among those ideals are free expression, gathering of history, saving of the past, and making it all available to as wide an audience, without limit, as possible. These aren’t just words to me.

Regardless of whether one perceives the coming future as rife with specific threats, I’ve discovered that life is consistently filled with threats, and only vigilance and dedication can break past the fog of possibilities. To that end, the Canadian Backup of the Internet Archive and the IA.BAK projects are clear bright lines of effort to protect against all futures dark and bright. The heritage, information and knowledge within the Internet Archive’s walls are worth protecting at all costs. That’s what drives me and why these two efforts are more than just experiments or configurations of hardware and location.

So, hard drives or cash, your choice. Or both!

Countryman – Retreating to North-West Thailand

Published 29 Nov 2016 by Tom Wilson in tom m wilson.

Made it to Cave Lodge in the small village of Tham Lot.  The last time I was here was seven years ago. I’m sitting on a hammock above the softly flowing river and the green valley. A deeply relaxing place. I arrived here a few days ago. We came on our motorbike taxis from the main […]

De Anza students football fandoms endure regardless of team success

Published 28 Nov 2016 by legoktm in The Lego Mirror.

Fans of the San Francisco 49ers and Oakland Raiders at De Anza College are loyal to their teams even when they are not doing well, but do prefer to win.

The Raiders lead the AFC West with a 9-2 record, while the 49ers are last in the NFC West with a 1-10 record. This is a stark reversal from 2013, when the 49ers were competing in the Super Bowl and the Raiders finished the season with a 4-12 record, as reported by The Mercury News.

49ers fans are not bothered though.

“My entire family is 49ers fans, and there is no change in our fandom due to the downturn,” said Joseph Schmidt.

Schmidt recently bought a new 49ers hat that he wears around campus.

Victor Bejarano concurred and said, “I try to watch them every week, even when they’re losing.”

A fan since 2011, he too wears a 49ers hat around campus to show his support for the team.

Sathya Reach said he has stopped watching the 49ers play not because of their downfall, but because of an increased focus on school.

“I used to watch (the 49ers) with my cousins, not so much anymore,” Reach said.

Kaepernick in 2012 Mike Morbeck/CC-BY-SA

Regardless of their support, 49ers fans have opinions on how the team is doing, mostly about 49ers quarterback Colin Kaepernick. Kaepernick protests police brutality against minorities before each game by kneeling during the national anthem. His protest placed him on the cover of TIME magazine, and ranked as the most disliked player in the NFL in a September poll conducted by E-Poll Marketing Research.

Bejarano does not follow Kaepernick’s actions off the field, but said that on the field, Kaepernick was not getting the job done.

“He does what he does, and has his own reasons,” Reach said.

Self-described Raider “fanatic” Mike Nijmeh agreed, calling Kaepernick a bad quarterback.

James Stewart, a Raiders’ fan since 5 years old, disagreed and said, “I like Kaepernick, and wouldn’t mind if he was a Raiders’ backup quarterback.”


Both Nijmeh and Stewart praised the Raiders' quarterback, Derek Carr, and Nijmeh, dressed in his Raiders hat, jacket and jersey, said, “Carr could easily be the MVP this year.”

Stewart said that while he also thought Carr is MVP caliber, Tom Brady, the quarterback of the New England Patriots, is realistically more likely to win.

“Maybe in five years,” said Stewart, explaining that he expected Brady to have retired by then.

He is not the only one, as Raider teammate Khalil Mack considers Carr to be a potential MVP, reported USA Today. USA Today Sports’ MVP tracker has Carr in third.

Some 49ers fans are indifferent about the Raiders, others support them simply because they are a Bay Area team, and others just do not like them.

Bejarano said that he supports the Raiders because they are a Bay Area team, but that it bothers him that they are doing so well in contrast to the 49ers.

Nijmeh summed up his feelings by saying the Raiders’ success has made him much happier on Sundays.

Related Stories:

Updates 1.2.3 and 1.1.7 released

Published 27 Nov 2016 by Roundcube Webmail Dev Team in Roundcube Webmail Project News.

We just published updates to both stable versions, 1.2 and 1.1, delivering important bug fixes and improvements which we picked from the upstream branch.

Included is a fix for a recently reported security issue when using PHP’s mail() function. It was discovered by Robin Peraglie using RIPS, and more details, along with a CVE number, will be published shortly.

See the full changelog for 1.2.3 in the wiki. Version 1.1.7 is a security update fixing the mail() issue and thus only relevant to Roundcube installations not having an SMTP server configured for mail delivery.

Both versions are considered stable and we recommend updating all production installations of Roundcube to either of these versions. Download them from GitHub via

As usual, don’t forget to backup your data before updating!


Published 26 Nov 2016 by mblaney in Tags from simplepie.

Merge pull request #495 from mblaney/master

New release 1.4.3

Karen Village Life

Published 26 Nov 2016 by Tom Wilson in tom m wilson.

The north-west corner of Thailand is the most sparsely populated corner of the country.  Mountains, forests and rivers, as far as the eye can see.  And sometimes a village. This village is called Menora.  Its a Karen village, without electricity or running water.  Its very, very remote and not mapped on Google Maps. Living out […]


Published 23 Nov 2016 by fabpot in Tags from Twig.

Thai Forest Buddhism

Published 22 Nov 2016 by Tom Wilson in tom m wilson.

The forests of Thailand have been in retreat, particularly since the 1980s.  Forest monks, who go to the forests to meditate, have seen their home get smaller and smaller.  In some cases this has prompted them to become defenders of the forest, for example performing tree ordination ceremonies, effectively ordaining a tree in saffron robes […]

Open Source at DigitalOcean: Introducing go-qemu and go-libvirt

Published 21 Nov 2016 by DigitalOcean in DigitalOcean Blog.

At DigitalOcean, we use libvirt with QEMU to create and manage the virtual machines that compose our Droplet product. QEMU is the workhorse that enables hundreds of Droplets to run on a single server within our data centers. To perform management actions (like powering off a Droplet), we originally built automation which relied on shelling out to virsh, a command-line client used to interact with the libvirt daemon.

As we began to deploy Go into production, we realized we would need simple and powerful building blocks for future Droplet management tooling. In particular, we wanted packages with:

We explored several open source packages for managing libvirt and QEMU, but none of them were able to completely fulfill our wants and needs, so we created our own: go-qemu and go-libvirt.

How Do QEMU and go-qemu Work?

QEMU provides the hardware emulation layer between Droplets and our bare metal servers. Each QEMU process provides a JSON API over a UNIX or TCP socket, much like a REST API you might find when working with web services. However, instead of using HTTP, it communicates over a protocol known as the QEMU Monitor Protocol (QMP). When you request an action, like powering off a Droplet, the request eventually makes its way to the QEMU process via the QMP socket in the form of { "execute" : "system_powerdown" }.
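For illustration, here is a minimal sketch of building that QMP payload. The type and helper names are hypothetical (in the real package, go-qemu's qmp.Monitor handles the transport and the capabilities handshake); this only shows the wire shape of a request:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// qmpCommand mirrors the QMP request shape; commands that take
// parameters send an "arguments" object alongside "execute".
type qmpCommand struct {
	Execute   string      `json:"execute"`
	Arguments interface{} `json:"arguments,omitempty"`
}

// marshalCommand produces the JSON a client writes to the QMP
// UNIX or TCP socket. Dialing the socket is omitted here.
func marshalCommand(name string) (string, error) {
	b, err := json.Marshal(qmpCommand{Execute: name})
	return string(b), err
}

func main() {
	payload, err := marshalCommand("system_powerdown")
	if err != nil {
		panic(err)
	}
	fmt.Println(payload) // {"execute":"system_powerdown"}
}
```

A real client would write that payload to the monitor socket and then read newline-delimited JSON replies back, correlating events and return values.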

go-qemu is a Go package that provides a simple interface for communicating with QEMU instances over QMP. It enables the management of QEMU virtual machines directly, using either the monitor socket of a VM or by proxying the request through libvirt. All go-qemu interactions rely on the qemu.Domain and qmp.Monitor types. A qemu.Domain is constructed with an underlying qmp.Monitor, which understands how to speak to the monitor socket of a given VM.

How Do libvirt and go-libvirt Work?

libvirt was designed for client-server communication. Users typically interact with the libvirt daemon through the command-line client virsh. virsh establishes a connection to the daemon either through a local UNIX socket or a TCP connection. Communication follows a custom asynchronous protocol whereby each RPC request or response is preceded by a header describing the incoming payload. Most notably, the header contains a procedure identifier (e.g., "start domain"), the type of request (e.g., call or reply), and a unique serial number used to correlate RPC calls with their respective responses. The payload following the header is XDR-encoded, providing an architecture-agnostic method for describing strict data types.
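As a rough sketch of that framing, the header fields can be packed big-endian, matching XDR conventions, with encoding/binary. The field layout and names below are illustrative assumptions drawn from the description above, not the authoritative libvirt protocol definition:

```go
package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
)

// rpcHeader sketches the fixed-size header that precedes each libvirt
// RPC payload: a procedure identifier, the request type, and a serial
// number used to pair calls with their replies.
type rpcHeader struct {
	Program   uint32 // which RPC program is being spoken to
	Version   uint32 // protocol version
	Procedure int32  // procedure identifier, e.g. "start domain"
	Type      int32  // call, reply, ...
	Serial    uint32 // correlates an RPC call with its response
	Status    int32  // OK / error indicator on replies
}

// packHeader encodes the header big-endian, as XDR requires.
func packHeader(h rpcHeader) ([]byte, error) {
	buf := new(bytes.Buffer)
	err := binary.Write(buf, binary.BigEndian, h)
	return buf.Bytes(), err
}

func main() {
	b, err := packHeader(rpcHeader{Program: 1, Version: 1, Procedure: 42, Serial: 7})
	if err != nil {
		panic(err)
	}
	fmt.Printf("packed %d-byte header: % x\n", len(b), b)
}
```

A client prepends such a header (plus the payload length) to each XDR-encoded body, then matches incoming replies to outstanding calls by serial number.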

go-libvirt is a Go package which provides a pure Go interface to libvirt. go-libvirt can be used in conjunction with go-qemu to manage VMs by proxying communication through the libvirt daemon.

go-libvirt exploits the availability of the RPC protocol to communicate with libvirt without the need for cgo and C bindings. While using libvirt's C bindings would be easier up front, we try to avoid cgo when possible. Dave Cheney has written an excellent blog post which mirrors many of our own findings. A pure Go library simplifies our build pipelines, reduces dependency headaches, and keeps cross-compilation simple.

By circumventing the C library, we need to keep a close eye on changes in new libvirt releases; libvirt developers may modify the RPC protocol at any time, potentially breaking go-libvirt. To ensure stability and compatibility with various versions of libvirt, we install and run it within Travis CI, which allows integration tests to be run for each new commit to go-libvirt.


The following code demonstrates usage of go-qemu and go-libvirt to interact with all libvirt-managed virtual machines on a given hypervisor.

package main

import (
    "fmt"
    "log"
    "net"
    "time"

    "github.com/digitalocean/go-qemu/hypervisor"
)

func main() {
    driver := hypervisor.NewRPCDriver(func() (net.Conn, error) {
        return net.DialTimeout("unix", "/var/run/libvirt/libvirt-sock", 2*time.Second)
    })

    hv := hypervisor.New(driver)

    fmt.Println("Domain\t\tQEMU Version")
    domains, err := hv.Domains()
    if err != nil {
        log.Fatalf("unable to get domains: %v", err)
    }

    for _, dom := range domains {
        version, err := dom.Version()
        if err != nil {
            log.Fatalf("unable to get domain version: %v", err)
        }

        fmt.Printf("%s\t\t%s\n", dom.Name, version)
    }
}

Domain        QEMU Version
Droplet-1        2.7.0
Droplet-2        2.6.0
Droplet-3        2.5.0

What's Next?

Both go-qemu and go-libvirt are still under active development. In the future, we intend to provide an optional cgo QMP monitor which wraps the libvirt C API using the libvirt-go package.

go-qemu and go-libvirt are used in production at DigitalOcean, but the APIs should be treated as unstable, and we recommend that users of these packages vendor them into their applications.

We welcome contributions to the project! In fact, a recent major feature in the go-qemu project was contributed by an engineer outside of DigitalOcean. David Anderson is working on a way to automatically generate QMP structures using the QMP specification in go-qemu. This will save an enormous amount of tedious development and enables contributors to simply wrap these raw types in higher-level types to provide a more idiomatic interface to interact with QEMU instances.

If you'd like to join the fun, feel free to open a GitHub pull-request, file an issue, or join us on IRC (freenode/#go-qemu).

Edit: as clarified by user "eskultet" in our IRC channel, libvirt does indeed guarantee API and ABI stability, and the RPC layer is able to detect any extra or missing elements that would cause the RPC payload to not meet a fixed size requirement. This blog has been updated to correct this misunderstanding.

In Which I Tell You It’s A Good Idea To Support a Magazine-Scanning Patreon

Published 20 Nov 2016 by Jason Scott in ASCII by Jason Scott.

So, Mark Trade and I have never talked, once.

All I know about Mark is that due to his efforts, over 200 scans of magazines are up on the Archive.


These are very good scans, too. The kind of scans that a person looking to find a long-lost article, verify a hard-to-grab fact, or needs to pass along to others a great image would kill to have. 600 dots per inch, excellent contrast, clarity, and the margins cut just right.


So, I could fill this entry with all the nice covers, but covers are kind of easy, to be frank. You put them face down on the scanner, you do a nice big image, and then touch it up a tad. The cover paper and the printing is always super-quality compared to the rest, so it’ll look good:


But the INSIDE stuff… that’s so much harder. Magazines were often bound in a way that put the images RIGHT against the binding and not every magazine did the proper spacing and all of it is very hard to shove into a scanner and not lose some information. I have a lot of well-meaning scans in my life with a lot of information missing.

But these…. these are primo.




When I stumbled on the Patreon, he had three patrons giving him $10 a month. I’d like it to be $500, or $1000. I want this to be his full-time job.

Reading the Patreon page’s description of his process shows he’s taking it quite seriously: steaming glue, removing staples. I’ve gone on record about the pros and cons of destructive scanning, but game magazines are not rare, just entirely unrepresented in scanned items compared to how many people have these things in their past.

I read something like this:

It is extremely unlikely that I will profit from your pledge any time soon. My scanner alone was over $4,000 and the scanning software was $600. Because I’m working with a high volume of high resolution 600 DPI images I purchased several hard drives including a CalDigit T4 20TB RAID array for $2,000. I have also spent several thousand dollars on the magazines themselves, which become more expensive as they become rarer. This is in addition to the cost of my computer, monitor, and other things which go into the creation of these scans. It may sound like I’m rich but really I’m just motivated, working two jobs and pursuing large projects.

…and all I think about is, this guy is doing so much amazing work that so many thousands could be benefiting from, and they should throw a few bucks at him for his time.

My work consists of carefully removing individual pages from magazines with a heat gun or staple-remover so that the entire page may be scanned. Occasionally I will use a stack paper cutter where appropriate and will not involve loss of page content. I will then scan the pages in my large format ADF scanner into 600 DPI uncompressed TIFFs. From there I either upload 300 DPI JPEGs for others to edit and release on various sites or I will edit them myself and store the 600 DPI versions in backup hard disks. I also take photos of magazines still factory-sealed to document their newsstand appearance. I also rip full ISOs of magazine coverdiscs and make scans of coverdisc sleeves on a color-corrected flatbed scanner and upload those to as well.

This is the sort of thing I can really get behind.

The Internet Archive is scanning stuff, to be sure, but the focus is on books. Magazines are much, much harder to scan – the book scanners in use are just not as easy to use with something bound like magazines are. The work that Mark is doing is stuff that very few others are doing, and to have canonical scans of the advertisements, writing and materials from magazines that used to populate the shelves is vital.

Some time ago, I gave all my collection of donated game-related magazines to the Museum of Art and Digital Entertainment, because I recognized I couldn’t be scanning them anytime soon, and how difficult it was going to be to scan them. It would take some real major labor I couldn’t personally give.

Well, here it is. He’s been at it for a year. I’d like to see that monthly number jump to $100/month, $500/month, or more. People dropping $5/month towards this Patreon would be doing a lot for this particular body of knowledge.

Please consider doing it.


A Simple Explanation: VLC.js

Published 17 Nov 2016 by Jason Scott in ASCII by Jason Scott.

The previous entry got the attention it needed, and the maintainers of the VLC project connected with both Emularity developers and Emscripten developers and the process has begun.

The best example of where we are is this screenshot:


The upshot of this is that a javascript compiled version of the VLC player now runs, spits out a bunch of status and command line information, and then gets cranky it has no video/audio device to use.

With the Emularity project, this was something like 2-3 months into the project. In this case, it happened in 3 days.

The reasons it took such a short time were multi-fold. First, the VLC maintainers jumped right into it at full-bore. They’ve had to architect VLC for a variety of wide-ranging platforms including OSX, Windows, Android, and even weirdos like OS/2; to have something aimed at “web” is just another place to go. (They’d also made a few web plugins in the past.) Second, the developers of Emularity and Emscripten were right there to answer the tough questions, the weird little bumps and switchbacks.

Finally, everybody has been super-energetic about it – diving into the idea, without getting hung up on factors or features or what may emerge; the same flexibility that coding gives the world means that the final item will be something that can be refined and improved.
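For readers unfamiliar with the toolchain involved, the basic Emscripten workflow behind JSMESS/The Emularity and this VLC effort looks roughly like this (a hypothetical sketch: the file names are invented, and it assumes an installed and activated emsdk):

```shell
# Hypothetical sketch: compile C/C++ source to JavaScript with emcc,
# the Emscripten compiler frontend. File names here are invented.
emcc player.c -O2 -o player.js   # emits player.js (plus a .wasm file on recent emcc)
node player.js                   # the compiled module runs anywhere JavaScript runs
```

The real VLC build is vastly more involved, of course; the point is only that the compiler takes ordinary C/C++ in and produces something a browser can execute.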

So that’s great news. But after the initial request went into a lot of screens, a wave of demands and questions came along, and I thought I’d answer some of them to the best of my abilities, and also make some observations as well.


When you suggest something somewhat crazy, especially in the programming or development world, you get a range of responses. And if you end up on Hacker News, Reddit, or a number of other high-traffic locations, those reactions fall into some very predictable areas:

So, quickly on some of these:

But let’s shift over to why I think this is important, and why I chose VLC to interact with.

First, VLC is one of those things that people love, or people wish there was something better than, but VLC is what we have. It’s flexible, it’s been well-maintained, and it has been singularly focused. For a very long time, the goal of the project has been aimed at turning both static files AND streams into something you can see on your machine. And the machine you can see it on is pretty much every machine capable of making audio and video work.

Fundamentally, VLC is a bucket: drop in a huge variety of sound-oriented or visual-oriented files and containers, and it will do something with them. DVD ISO files become playable DVDs, including all the features of said DVDs. VCDs become craptastic but playable DVDs. MP3, FLAC, MIDI, all of them fall into VLC and start becoming scrubbing-ready sound experiences. There are quibbles here and there about accuracy of reproduction (especially with older MOD-like formats like S3M or .XM), but these are code, and fixable in code. That VLC doesn’t immediately barf on the rug with the amount of crapola that can be thrown at it is enormous.

And completing this thought, by choosing something like VLC, with its top-down open source condition and universal approach, the “closing of the loop” from VLC being available in all browsers instantly will ideally cause people to find the time to improve and add formats that otherwise wouldn’t experience such advocacy. Images into Apple II floppy disk image? Oscilloscope captures? Morse code evaluation? Slow Scan Television? If those items have a future, it’s probably in VLC and it’s much more likely if the web uses a VLC that just appears in the browser, no fuss or muss.


Fundamentally, I think my personal motivations are pretty transparent and clear. I help oversee a petabytes-big pile of data at the Internet Archive. A lot of it is very accessible; even more of it is not, or has to have clever “derivations” pulled out of it for access. You can listen to .FLACs that have been uploaded, for example, because we derive (noted) mp3 versions that go through the web easier. Same for the MPG files that become .mp4s and so on, and so on. A VLC that (optionally) can play off the originals, or which can access formats that currently sit as huge lumps in our archives, will be a fundamental world changer.

Imagine playing DVDs right there, in the browser. Or really old computer formats. Or doing a bunch of simple operations to incoming video and audio to improve it without having to make a pile of slight variations of the originals to stream. VLC.js will do this and do it very well. The millions of files that are currently without any status in the archive will join the millions that do have easy playability. Old or obscure ideas will rejoin the conversation. Forgotten aspects will return. And VLC itself, faced with such a large test sample, will get better at replaying these items in the process.
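As a sketch of the kind of derivation work described above: VLC can already transcode headlessly from its command-line interface, no GUI required (the file names here are hypothetical, and this assumes a local VLC install):

```shell
# Hypothetical sketch: derive a web-friendly MP4 from an MPG original
# using VLC's headless stream-output ("sout") chain.
vlc -I dummy original.mpg \
    --sout '#transcode{vcodec=h264,acodec=mp4a}:standard{access=file,mux=mp4,dst=derivative.mp4}' \
    vlc://quit
```

A browser-side VLC.js would let that same decode pipeline run at access time, instead of requiring a pre-derived copy of every file.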

This is why this is being done. This is why I believe in it so strongly.


I don’t know what roadblocks or technical decisions the team has ahead of it, but they’re working very hard at it, and some sort of prototype seems imminent. The world with this happening will change slightly when it starts working. But as it refines, and as these secondary aspects begin, it will change even more. VLC will change. Maybe even browsers will change.

Access drives preservation. And that’s what’s driving this.

See you on the noisy and image-filled other side.

What do "Pro" users want?

Published 16 Nov 2016 by Carlos Fenollosa in Carlos Fenollosa — Blog.

My current machine is a 2013 i7 Macbook Air. It doesn't have the Pro label; however, it has two USB 3.0 ports, an SD slot, and a Thunderbolt port. 12 hours of battery life. One of the best non-Retina screens around. Judging by this week's snarky comments, it's more Pro than the 2016 Macbook Pro.

Me, I love this laptop. In fact, I love it so much that I bought it to replace an older MBA. I really hoped that Apple would keep selling the same model with a Retina screen and bumped specs.

But is it a Pro computer or not? Well, let me twist the language. I make my living with computers, so by definition it is. Let's put it another way: I could have spent more money on a machine which has Pro in its name, but that wouldn't have improved my work output.

What is a Pro user?

So there's this big discussion on whether the Pro label means anything for Apple.

After reading dozens of reviews and blog posts, unsurprisingly, one discovers that different people have different needs. The bottom line is that a Pro user is someone who needs to get their work done and cannot tolerate much bullshit with their tools.

In my opinion, the new Macbook Pros are definitely a Pro machine, even with some valid criticisms. Apple product releases are usually followed by zesty discussions, but this time it's a bit different. It's not only angry Twitter users who are complaining; professional reviewers, engineers, and Pro users have also voiced their concerns.

I think we need to stop thinking that Apple is either stupid or malevolent. They are neither. As a public company, the metric by which their executives are evaluated is stock performance. Infuriating users for no reason only leads to decreasing sales, lower profits, and unhappy investors.

I have some theories on why Apple seems to care less about the Mac, and why many feel the need to complain.

Has the Pro market changed?

Let's be honest: for the last five years Apple probably had the best and most popular computer lineup and pricing in their history. All markets (entry, pro, portability, desktops) had fantastic machines which were totally safe to buy and recommend, at extremely affordable prices.

I've seen this myself. In Spain, one of the poorest EU countries, Apple is not hugely popular. Macs and iPhones are super expensive, and many find it difficult to justify an Apple purchase on their <1000€ salary.

However, in the last three to five years, everybody seemed to buy a Mac, even friends of mine who swore they never would. They finally caved in, not because of my advice, but because their non-nerd friends recommended MBPs. And that makes sense. In a 2011 market saturated with ultraportables, Windows 8, and laptops which break every couple of years, Macs were a great investment. You can even resell them after five years for 50% of their price, essentially renting them for half price.

So what happened? Right now, Pros are not the only ones using Macbook Pros. They're not a professional tool anymore; they're a consumer product. Apple collects usage analytics for their machines and, I suppose, makes informed decisions, like removing less-used ports or not increasing storage on iPhones for a long time.

What if Apple is being fed overwhelmingly non-Pro user data for their Pro machines and, as a consequence, their decisions don't serve Pro users anymore, but rather the general public?

First, let's make a quick diversion to address the elephant in the room because, after all, I empathize with the critics.

Apple is Apple

Some assertions you can read on the Internet seem out of touch for a company which made the glaring mistake of building a machine without a floppy drive, released a lame mp3 player without wireless and with less space than a Nomad, tried to revolutionize the world with a phone without a keyboard, and produced an oversized iPhone which is killing the laptop in the consumer market.

Apple always innovates. You can agree whether the direction is correct, but they do. They also copy, and they also steal, like every other company.

What makes them stand out is that they are bolder, dare I say, more courageous than others, to the point of having the courage to use the word courage to justify an unpopular technical decision.

They take more risks on their products. Yes, I think that the current audio jack transition could've been handled better, but they're the first "big brand" to always make such changes on their core products.

This brings us to my main gripe with the current controversy. I applaud their strategy of bringing iPhone ideas, both hardware and software, to the Mac. That is a fantastic policy. You can design a whole device around a touch screen and a secure enclave, then miniaturize it and stick it on a Macbook as a Touch Bar.

Having said that, us pros are generally conservative: we don't update our OS until versions X.1 or X.2, we need all our tools to be compatible, and we don't usually buy first-gen products, unless we self-justify our new toy as a "way to test our app experience on users who have this product".

The Great Criticism Of The 2016 Macbook Pro is mainly fueled by customers who wanted something harder, better, faster, stronger (and cheaper) and instead they got a novel consumer machine with few visible Pro improvements over the previous one and some prominent drawbacks.

Critical Pros are disappointed because they think Apple no longer cares about them. They feel they have no future using products from a company they've long invested in. Right now, there is no clear competitor to the Mac, but if there were, I'm sure many people would vote with their wallets for the other guy.

These critics aren't your typical Ballmers bashing the iPhone out of spite. They are concerned, loyal customers who have spent tens of thousands of dollars on Apple's products.

What's worse, Apple doesn't seem to understand the backlash, as shown by recent executive statements. Feeling misunderstood just infuriates people more, and there are few things as powerful as people frustrated and disappointed with the figures and institutions they respect.

Experiment, but not on my lawn

If I could ask Apple for just one thing, it would be to restrict their courage to the consumer market.

'Member the jokes about the 2008 Macbook Air? Only one port, no DVD drive?

The truth is, nobody cared, because that machine was clearly not for them; it was an experiment which, if I may say so, turned out to be one of the most successful ever. Eight years later, many laptops aspire to be a Macbook Air, and the current entry-level Apple machine, the Macbook "One", is only an iteration on that design.

Nowadays, Apple calls the Retina MBA we had been waiting for a "Macbook Pro". That machine has a 15W CPU, only two ports (one of which is needed for charging), good-enough internals, and a great battery for light browsing which suffers under high CPU usage.

But when Apple rebrands this Air as a Pro, real Pros get furious, because that machine clearly isn't for them. And this time, to add more fuel to the fire, the consumer segment gets furious too, since it's too expensive; $400 too expensive, to be exact.

By making the conscious decision of positioning this as a Pro machine both in branding and price point, Apple is sending the message that they really do consider this a Pro machine.

One unexpected outcome of this crisis

Regardless, there is one real, tangible risk for Apple.

When looking at the raw numbers, what Apple sees is this: 70% of their revenue comes from iOS devices. Thus, they allocate around 70% of company resources to that segment. This makes sense.


Unless there is an external factor which drives iPhone sales: the availability of iPhone software, which is not controlled by Apple. This software is developed by external Pros. On Macs.

The explosion of the iOS App Store has not been a coincidence. It's the combination of many factors, one of which is a high number of developers and geeks using a Mac daily, thanks to its awesomeness and recent low prices. How many of us got into iPhone development just because Xcode was right there in our OS?

Similarly to how it's difficult to find COBOL developers because barely anyone learns COBOL anymore, if most developers, whatever their day job is, start switching from Macs to PCs, interest in iOS development will dwindle quickly.

In summary, the success of the iPhone is directly linked to developer satisfaction with the Mac.

This line of reasoning is not unprecedented. In the 90s, almost all developers were using the Microsoft platform until Linux and OSX appeared. Nowadays, Microsoft is suffering heavily for their past technical decisions. Their mobile platform crashed not because the phones were bad, but because they had no software available.

Right now, Apple is safe, and Pro users will keep using Macs not only thanks to Jobs' successful walled garden strategy, but also because they are the best tools for the job.

While Pro users may not be trend-setters, they win in the long term. Linux won on the server. Apple won the smartphone race because it had already won the developer race: they made awesome laptops, and those of us who were using Linux just went ahead and bought a Mac.

Apple thinks future developers will code on iPads. Maybe that will be true 10 years from now. The question is: can they bridge this 10-year gap between current developers and future ones?

The perfect Pro machine

This Macbook Pro is a great machine and, with USB-C ports, is future proof.

Dongles and keyboards are a scapegoat. The criticisms are valid, but I feel they are unjustly directed at this specific machine instead of at Apple's strategy in general; or, at least, at the tiny part of it that us consumers see.

Photographers want an SD slot. Developers want more RAM for their VMs. Students want lower prices. Mobile professionals want an integrated LTE chip. Roadies want more battery life. Here's my wish, different from everybody else's: I want the current Macbook Air with a Retina screen and 20 hours of battery life (10 when the CPU is peaking).

Everybody seems to be either postulating why this is not a Pro machine or criticizing the critics. And they are all right.

Unfortunately, unless given infinite resources, the perfect machine will not exist. I think the critics know that, even if many are projecting their rage on this specific machine.

A letter to Santa

Pro customers, myself included, are afraid that Apple is going to stab them in the back in a few years, and Apple is not doing anything substantial to reduce these fears.

In computing, too, perception is as important as cold, hard facts.

Macs are great UNIX machines for developers, have fantastic screens for multimedia Pros, offer amazing build-quality value for budget-constrained self-employed engineers, work awesomely with audio setups thanks to their almost inaudible fans, have triple-A software available, and you can even install Windows.

We have to admit that us Pros are mostly happily locked in the Apple ecosystem. When we look for alternatives, in many cases, we only see crap. And that's why we are afraid. Is it our own fault? Of course, we are all responsible for our own decisions. Does this mean we have no right to complain?

Apple, if you're listening, please do:

  1. Remember that you sell phones because there are people developing apps for them.
  2. Ask your own engineers which kind of machine they'd like to develop on. Keep making gorgeous Starbucks ornaments if you wish, but clearly split the product lines and the marketing message so all consumers feel included.
  3. Many iOS apps are developed outside the US, and the current price point for your machines is too high for the rest of the world. I know we pay more in taxes, but even accounting for that, a bag of chips, an apartment, or a bike doesn't cost the same in Manhattan as in Barcelona.
  4. Keep making great hardware and innovating, but please, experiment with your consumer line, not your Pro line.
  5. Send an ACK to let us Pros recover our trust in you. Unfortunately, at this point, statements are not enough.

Thank you for reading.

Tags: hardware, apple


Sukhothai: The Dawn of Happiness

Published 16 Nov 2016 by Tom Wilson in tom m wilson.

  It is early morning in Sukhothai, the first capital of present day Thailand, in the north of the country.  From the Sanskrit, Sukhothai means ‘dawn of happiness’.  The air is still cool this morning, and the old city is empty of all but two or three tourists.  Doves coo gently from ancient stone rooftops. […]

Open Source at Its (Hacktober)best

Published 15 Nov 2016 by DigitalOcean in DigitalOcean Blog.

The third-annual Hacktoberfest, which wrapped up October 31, brought a community of project maintainers, seasoned contributors, and open-source beginners together to give back to many great projects. It was a record-setting year which confirmed the power of communities in general, and of the open source community in particular.

Here's what you accomplished in a nutshell:

In this post, we'll get more into numbers and will share some stories from contributors, maintainers, and communities across the world.


We put the challenge out there and you stepped up to exceed it! Congratulations to both first-time open source contributors and experienced contributors who set aside time and resources to push the needle forward for thousands of open source projects.

This year, we had a record number of contributors from around the world participate:

Developers around the world shared their stories with us, explaining what Hacktoberfest meant to them. One contributor who completed the challenge said:

I am a senior computer science student but have always been too intimidated to submit to other open github projects. Hacktoberfest gave me a reason to do that and I am really glad I did. I will for sure be submitting a lot more in the future.

Aditya Dalal from Homebrew Cask went from being a Hacktoberfest contributor in 2015 to being a project maintainer in 2016:

I actually started contributing to Open Source in a meaningful way because of Hacktoberfest. Homebrew Cask was a convenient tool in my daily usage, and Hacktoberfest provided an extra incentive to contribute back. Over time, I continued contributing and ended up as a maintainer, focusing on triaging issues and making the contribution process as simple as possible (which I like to think we have succeeded at).


A HUGE and very special shout out goes out to project maintainers. Many of you added "Hacktoberfest" labels (+15,000) to project issues and tweeted out your projects, encouraging others to join in on the fun. We know that Hacktoberfest makes things busier than usual. Thank you for setting a great example for future project maintainers—without you, Hacktoberfest wouldn't be possible!

Some maintainers went out of their way to make sure contributors had a great experience:

…and others created awesome challenges:


This year, we wanted to highlight the collaborative aspect of open source and created a Hacktoberfest-themed Meetup Kit with tips and tools for anyone who wanted to organize a Hacktoberfest event.

As a result, Hacktoberfest meetups popped up all over the world. More than 30 communities held 40 events in 29 cities across 12 countries, including Cameroon, Canada, Denmark, Finland, France, India, Kenya, New Zealand, Spain, Ukraine, the UK, and the US (click here to see a full list of Hacktoberfest events).

Thank you to event organizers who brought your communities together through pair programming, mentorship, demos, workshops, and hack fests.

If you didn't have a chance to attend a Hacktoberfest-themed event near you, we encourage you to host one anytime or suggest the idea to your favorite meetup.


Clockwise, from top left: Hacktoberfest Paris Meetup by Sigfox, Paris, France, Fullstack Open Source | Hacktober Edition, Los Angeles, California, USA, Hacktober Fest Meetup at NITK Surathkal, Mangalore, India, and Hacktober Night by BlackCodeCollective, Arlington, Virginia, USA.

Beyond 2016

Thank you to our friends at GitHub for helping us make Hacktoberfest 2016 possible. And special thanks go out to our friends at Mozilla, Intel, and CoreOS for supporting the initiative.

Tell us: What did you enjoy about Hacktoberfest this year? What can we do to make it even better next year? Let us know in the comments.

Until we meet again—happy hacking!

What is the mediawiki install path on Ubuntu when you install it from the Repos?

Published 15 Nov 2016 by Akiva in Newest questions tagged mediawiki - Ask Ubuntu.

What is the mediawiki install path on Ubuntu when you install it from the Repos?

Specifically looking for the extensions folder.

Working the polls: reflection

Published 9 Nov 2016 by legoktm in The Lego Mirror.

As I said earlier, I worked the polls from 6 a.m. to roughly 9:20 p.m. We had one voter come in just in the nick of time at 7:59 p.m.

I was glad to see that we had a lot of first time voters, as well as some who just filled out one issue on the three(!) page ballot, and then left. Overall, I've come to the conclusion that everyone is just like me and votes just to get a sticker. We had quite a few people who voted by mail and stopped by just to get their "I voted!" sticker.

I should get paid $145 for working, which I shall be donating to And I plan to be helping out during the next election!

HSTS header not being sent though rule is present and mod_headers is enabled

Published 5 Nov 2016 by jww in Newest questions tagged mediawiki - Server Fault.

We enabled HSTS in httpd.conf in the Virtual Host handling port 443. We tried with and without the <IfModule mod_headers.c>:

<IfModule mod_headers.c>
    Header set Strict-Transport-Security "max-age=10886400; includeSubDomains"
</IfModule>

But the server does not include the header in a response. Below is from curl over HTTPS:

> GET / HTTP/1.1
> Host:
> User-Agent: curl/7.51.0
> Accept: */*
< HTTP/1.1 200 OK
< Date: Sat, 05 Nov 2016 22:49:25 GMT
< Server: Apache/2.4.6 (CentOS) OpenSSL/1.0.1e-fips
< Last-Modified: Wed, 02 Nov 2016 01:27:08 GMT
< ETag: "8988-5404756e12afc"
< Accept-Ranges: bytes
< Content-Length: 35208
< Vary: Accept-Encoding
< Content-Type: text/html; charset=UTF-8

The relevant section of httpd.conf and the cURL transcript are shown below. Apache shows mod_headers is loaded, and grepping the logs doesn't reveal any error.
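As an aside, a self-contained way to check whether the header reaches any client (the hostname below is a placeholder; the helper function is hypothetical):

```shell
# Hypothetical helper: does a set of response headers include HSTS?
has_hsts() { grep -qi '^strict-transport-security' ; }

# Against a live server (placeholder hostname):
#   curl -sI https://www.example.com/ | has_hsts && echo present
# Simulated here with a canned response:
printf 'HTTP/1.1 200 OK\r\nStrict-Transport-Security: max-age=10886400\r\n' | has_hsts && echo present
```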

The Apache version is Apache/2.4.6 (CentOS). The PHP version is 5.4.16 (cli) (built: Aug 11 2016 21:24:59). The Mediawiki version is 1.26.4.

What might be the problem here, and how could I solve this?


<VirtualHost *:80>
    ServerAlias * *.cryptopp.*

    <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteCond %{REQUEST_METHOD} ^TRACE
        RewriteRule .* - [F]
        RewriteCond %{REQUEST_METHOD} ^TRACK
        RewriteRule .* - [F]
        #redirect all port 80 traffic to 443
        RewriteCond %{SERVER_PORT} !^443$
        RewriteRule ^/?(.*)$1 [L,R]
    </IfModule>
</VirtualHost>

<VirtualHost *:443>
    ServerAlias * *.cryptopp.*

    <IfModule mod_headers.c>
        Header set Strict-Transport-Security "max-age=10886400; includeSubDomains"
    </IfModule>
</VirtualHost>


# cat /etc/httpd/conf.modules.d/00-base.conf | grep headers
LoadModule headers_module modules/

# httpd -t -D DUMP_MODULES | grep header
 headers_module (shared)

error logs

# grep -IR "Strict-Transport-Security" /etc
/etc/httpd/conf/httpd.conf:        Header set Strict-Transport-Security "max-age=10886400; includeSubDomains" env=HTTPS  
# grep -IR "Strict-Transport-Security" /var/log/
# grep -IR "mod_headers" /var/log/


# find /var/www -name '.htaccess' -printf '%p\n' -exec cat {} \;
Deny from all
Deny from all
Deny from all
Deny from all
Deny from all
Deny from all
# Protect against bug 28235
<IfModule rewrite_module>
    RewriteEngine On
    RewriteCond %{QUERY_STRING} \.[^\\/:*?\x22<>|%]+(#|\?|$) [nocase]
    RewriteRule . - [forbidden]
# Protect against bug 28235
<IfModule rewrite_module>
    RewriteEngine On
    RewriteCond %{QUERY_STRING} \.[^\\/:*?\x22<>|%]+(#|\?|$) [nocase]
    RewriteRule . - [forbidden]
    # Fix for bug T64289
    Options +FollowSymLinks
Deny from all
Deny from all
RewriteEngine on
RewriteRule ^wiki/?(.*)$ /w/index.php?title=$1 [L,QSA]
<IfModule mod_deflate.c>
<FilesMatch "\.(js|css|html)$">
SetOutputFilter DEFLATE

curl transcript

$ /usr/local/bin/curl -Lv
* Rebuilt URL to:
*   Trying
* Connected to ( port 80 (#0)
> GET / HTTP/1.1
> Host:
> User-Agent: curl/7.51.0
> Accept: */*
< HTTP/1.1 302 Found
< Date: Sat, 05 Nov 2016 22:49:25 GMT
< Server: Apache/2.4.6 (CentOS) OpenSSL/1.0.1e-fips
< Location:
< Content-Length: 209
< Content-Type: text/html; charset=iso-8859-1
* Ignoring the response-body
* Curl_http_done: called premature == 0
* Connection #0 to host left intact
* Issue another request to this URL: ''
*   Trying
* Connected to ( port 443 (#1)
* ALPN, offering http/1.1
* Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH
* successfully set certificate verify locations:
*   CAfile: /opt/local/share/curl/curl-ca-bundle.crt
  CApath: none
* TLSv1.2 (OUT), TLS header, Certificate Status (22):
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Client hello (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS change cipher, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES256-GCM-SHA384
* ALPN, server did not agree to a protocol
* Server certificate:
*  subject: OU=Domain Control Validated; OU=COMODO SSL Unified Communications
*  start date: Sep 17 00:00:00 2015 GMT
*  expire date: Sep 16 23:59:59 2018 GMT
*  subjectAltName: host "" matched cert's ""
*  issuer: C=GB; ST=Greater Manchester; L=Salford; O=COMODO CA Limited; CN=COMODO RSA Domain Validation Secure Server CA
*  SSL certificate verify ok.
> GET / HTTP/1.1
> Host:
> User-Agent: curl/7.51.0
> Accept: */*
< HTTP/1.1 200 OK
< Date: Sat, 05 Nov 2016 22:49:25 GMT
< Server: Apache/2.4.6 (CentOS) OpenSSL/1.0.1e-fips
< Last-Modified: Wed, 02 Nov 2016 01:27:08 GMT
< ETag: "8988-5404756e12afc"
< Accept-Ranges: bytes
< Content-Length: 35208
< Vary: Accept-Encoding
< Content-Type: text/html; charset=UTF-8
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"
  <meta http-equiv="Content-Type" content="text/html; charset=utf-8">
  <title>Crypto++ Library 5.6.5 | Free C++ Class Library of Cryptographic Schemes</title>
  <meta name="description" content=
  "free C++ library for cryptography: includes ciphers, message authentication codes, one-way hash functions, public-key cryptosystems, key agreement schemes, and deflate compression">
  <link rel="stylesheet" type="text/css" href="cryptopp.css">

Firefox "The page isn’t redirecting properly" for a Wiki (all other Pages and UAs are OK) [closed]

Published 5 Nov 2016 by jww in Newest questions tagged mediawiki - Webmasters Stack Exchange.

We are having trouble with a website for a free and open source project. The website and its three components are as follows. It's running on a CentOS 7 VM hosted by someone else (PaaS).

The Apache version is Apache/2.4.6 (CentOS). The PHP version is 5.4.16 (cli) (built: Aug 11 2016 21:24:59). The Mediawiki version is 1.26.4.

The main site is OK and can be reached through both and in all browsers and user agents. The manual is OK and can be reached through both and in all browsers and user agents.

The wiki is OK under most Browsers and all tools. Safari is OK. Internet Explorer is OK. Chrome is untested because I don't use it. Command line tools like cURL and wget are OK. A trace using wget is below.

The wiki is a problem under Firefox. It cannot be reached at either and in Firefox. Firefox displays an error on both OS X 10.8 and Windows 8. Firefox is fully patched to the platform. The failure is:

[Screenshot: Firefox's "The page isn't redirecting properly" error]

We know the problem is due to a recent change directing all traffic to HTTPS. The relevant addition to httpd.conf is below. The change in our policy is due to Chrome's upcoming change to its security UX indicators.

I know these are crummy questions (none of us are webmasters or admins in our day jobs)... What is the problem? How do I troubleshoot it? How do I fix it?

wget trace

$ wget 
--2016-11-05 12:53:54--
Resolving (
Connecting to (||:80... connected.
HTTP request sent, awaiting response... 302 Found
Location: [following]
--2016-11-05 12:53:54--
Resolving (
Connecting to (||:443... connected.
HTTP request sent, awaiting response... 301 Moved Permanently
Location: [following]
--2016-11-05 12:53:54--
Connecting to (||:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/html]
Saving to: ‘index.html’

index.html              [ <=>                ]  20.04K  --.-KB/s    in 0.03s   

2016-11-05 12:53:54 (767 KB/s) - ‘index.html’ saved [20520]

Firefox access_log

# tail -16 /var/log/httpd/access_log
<removed irrelevant entries>
- - [05/Nov/2016:13:00:52 -0400] "GET /wiki/Main_Page HTTP/1.1" 302 20 "" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; rv:48.0) Gecko/20100101 Firefox/48.0"
- - [05/Nov/2016:13:00:52 -0400] "GET /wiki/Main_Page HTTP/1.1" 302 20 "" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; rv:48.0) Gecko/20100101 Firefox/48.0"
- - [05/Nov/2016:13:00:53 -0400] "GET /wiki/Main_Page HTTP/1.1" 302 20 "" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; rv:48.0) Gecko/20100101 Firefox/48.0"
- - [05/Nov/2016:13:00:53 -0400] "GET /wiki/Main_Page HTTP/1.1" 302 20 "" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; rv:48.0) Gecko/20100101 Firefox/48.0"
- - [05/Nov/2016:13:00:53 -0400] "GET /wiki/Main_Page HTTP/1.1" 302 20 "" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; rv:48.0) Gecko/20100101 Firefox/48.0"
- - [05/Nov/2016:13:00:53 -0400] "GET /wiki/Main_Page HTTP/1.1" 302 20 "" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; rv:48.0) Gecko/20100101 Firefox/48.0"
- - [05/Nov/2016:13:00:53 -0400] "GET /wiki/Main_Page HTTP/1.1" 302 20 "" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; rv:48.0) Gecko/20100101 Firefox/48.0"
- - [05/Nov/2016:13:00:53 -0400] "GET /wiki/Main_Page HTTP/1.1" 302 20 "" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; rv:48.0) Gecko/20100101 Firefox/48.0"
- - [05/Nov/2016:13:00:53 -0400] "GET /wiki/Main_Page HTTP/1.1" 302 20 "" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; rv:48.0) Gecko/20100101 Firefox/48.0"
- - [05/Nov/2016:13:00:54 -0400] "GET /wiki/Main_Page HTTP/1.1" 302 20 "" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; rv:48.0) Gecko/20100101 Firefox/48.0"
- - [05/Nov/2016:13:00:54 -0400] "GET /wiki/Main_Page HTTP/1.1" 302 20 "" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; rv:48.0) Gecko/20100101 Firefox/48.0"

httpd.conf change

<VirtualHost *:80>
    ServerAlias * *.cryptopp.*

    <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteCond %{REQUEST_METHOD} ^TRACE
        RewriteRule .* - [F]
        RewriteCond %{REQUEST_METHOD} ^TRACK
        RewriteRule .* - [F]

        #redirect all port 80 traffic to 443
        RewriteCond %{SERVER_PORT} !^443$
        RewriteRule ^/?(.*)$1 [L,R]
    </IfModule>
</VirtualHost>

<VirtualHost *:443>
    ServerAlias * *.cryptopp.*
</VirtualHost>

Wikidata Map Animations

Published 4 Nov 2016 by addshore in Addshore.

Back in 2013, maps were generated almost daily to track the immediate usage of the then-new coordinate location property within the project. An animation showing the amazing growth was then created by Denny & Lydia, which can be seen on Commons here. Recently we found the original images used to make this animation, starting in June 2013 and extending to September 2013, and to celebrate the fourth birthday of Wikidata we decided to make a few new animations.

The above animation contains images from 2013 (June to September) and then 2014 onwards.

This gap may explain the visible jump in brightness in the GIF, though different render settings used to create the maps could also account for it. At some point we should go back and generate standardized images for every week or month that coordinates have existed on Wikidata.

The whole gif and the individual halves can all be found on commons under CC0:

The animations were generated directly from png files using the following command:

convert -delay 10 -loop 0 *.png output.gif

These animations use the “small” images generated in previous posts such as Wikidata Map October 2016.

A Simple Request: VLC.js

Published 1 Nov 2016 by Jason Scott in ASCII by Jason Scott.

Almost five years ago today, I made a simple proposal to the world: port MAME/MESS to Javascript.

That happened.

I mean, it cost a dozen people hundreds of hours of their lives…. and there were tears, rage, crisis, drama, and broken hearts and feelings… but it did happen, and the elation and the world we live in now is quite amazing, with instantaneous emulated programs in the browser. And it’s gotten boring for people who know about it, except when they haven’t heard about it until now.

By the way: work continues earnestly on what was called JSMESS and is now called The Emularity. We’re doing experiments with putting it in WebAssembly and refining a bunch of UI concerns and generally making it better, faster, cooler with each iteration. Get involved – come to #jsmess on EFNet or contact me with questions.

In celebration of the five years, I’d like to suggest a new project, one of several candidates I’ve weighed but which I think has the best combination of effort to absolute game-changer in the world.


Hey, come back!

It is my belief that a Javascript (later WebAssembly) port of VLC, the VideoLan Player, will fundamentally change our relationship to a mass of materials and files out there, ones which are played, viewed, or accessed. Just like we had a lot of software locked away in static formats that required extensive steps to even view or understand, so too do we have formats beyond the “usual” that are also frozen into a multi-step process. Making these instantaneously function in the browser, all browsers, would be a revolution.

A quick glance at the features list of VLC shows how many variant formats it handles, from audio and video files through to encapsulations like DVDs and VCDs. Files that now rest as hunks of ISOs and .ZIP files could be turned into living, participatory parts of the online conversation. Also, formats like .MOD and .XM (trust me) would live again effectively.

Also, VLC has weathered years and years of existence, and the additional use case for it would help people contribute to it, much like there’s been some improvements in MAME/MESS over time as folks who normally didn’t dip in there added suggestions or feedback to make the project better in pretty obscure realms.

I firmly believe that this project, fundamentally, would change the relationship of audio/video to the web. 

I’ll write more about this in coming months, I’m sure, but if you’re interested, stop by #vlcjs on EFnet, or ping me on twitter at @textfiles, or write to me at with your thoughts and feedback.

See you.


Manually insert text into existing MediaWiki table row?

Published 30 Oct 2016 by jww in Newest questions tagged mediawiki - Webmasters Stack Exchange.

I'm trying to update a page for a MediaWiki database running MW version 1.26.4. The MediaWiki is currently suffering unexplained Internal Server Errors, so I am trying to perform an end-around by updating the database directly.

I logged into the database with the proper credentials. I dumped the table of interest and I see the row I want to update:

MariaDB [my_wiki]> select * from wikicryptopp_page;
| page_id | page_namespace | page_title                                                                | page_restrictions | page_is_redirect | page_is_new | page_random        | page_touched   | page_latest | page_len | page_content_model | page_links_updated | page_lang |
|       1 |              0 | Main_Page                                                                 |                   |                0 |           0 |     0.161024148737 | 20161011215919 |       13853 |     3571 | wikitext           | 20161011215919     | NULL      |
|    3720 |              0 | GNUmakefile                                                               |                   |                0 |           0 |     0.792691625226 | 20161030095525 |       13941 |    36528 | wikitext           | 20161030095525     | NULL      |

I know exactly where the insertion should occur, and I have the text I want to insert. The Page Title is GNUmakefile, and the Page ID is 3720.

The text is large at 36+ KB, and it's sitting in a text file on the filesystem. How do I manually insert the text into the existing table row?
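For what it's worth, in MediaWiki 1.26 the article text does not live in the `page` row at all: it is stored in text.old_text (keyed by old_id) and reached via revision.rev_text_id. A minimal Python sketch of preparing such an update as a parameterized statement (the helper name and file handling are illustrative, not MediaWiki code):

```python
# Build a parameterized UPDATE for a MediaWiki revision's text.
# In MW 1.26 article content lives in text.old_text (keyed by old_id),
# referenced from revision.rev_text_id; the `page` row is metadata only.
def build_update(path, rev_text_id):
    """Return (sql, params) suitable for a MySQL driver's cursor.execute()."""
    with open(path, encoding="utf-8") as f:
        new_text = f.read()  # the 36+ KB replacement text
    sql = "UPDATE text SET old_text = %s WHERE old_id = %s"
    return sql, (new_text, rev_text_id)
```

Passing the text as a bound parameter avoids having to escape a 36 KB wikitext blob by hand.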

How to log-in with more rights than Admin or Bureaucrat?

Published 30 Oct 2016 by jww in Newest questions tagged mediawiki - Webmasters Stack Exchange.

I'm having a heck of a time with MediaWiki and an Internal Server Error. I'd like to log-in with more privileges than afforded by Admin and Bureaucrat in hopes of actually being able to save a page.

I am an admin on the VM that hosts the wiki. I have all the usernames and passwords at my disposal. I tried logging in with the MediaWiki user and password from LocalSettings.php but the log-in failed.

Is it possible to acquire more privileges than provided by Admin or Bureaucrat? If so, how do I log-in with more rights than Admin or Bureaucrat?

Character set 'utf-8' is not a compiled character set and is not specified in the '/usr/share/mysql/charsets/Index.xml' file

Published 28 Oct 2016 by jww in Newest questions tagged mediawiki - Webmasters Stack Exchange.

We are trying to upgrade our MediaWiki software. According to Manual:Upgrading -> UPGRADE -> Manual:Backing_up_a_wiki, we are supposed to backup the database with:

mysqldump -h hostname -u userid -p --default-character-set=whatever dbname > backup.sql

When we run the command with our parameters and --default-character-set=utf-8:

$ sudo mysqldump -h localhost -u XXX -p YYY --default-character-set=utf-8 ZZZ > 
mysqldump: Character set 'utf-8' is not a compiled character set and is not specified in the '/usr/share/mysql/charsets/Index.xml' file

Checking Index.xml appears to show utf-8 is available. UTF-8 is specifically called out by Manual:$wgDBTableOptions.

$ cat /usr/share/mysql/charsets/Index.xml | grep -B 3 -i 'utf-8'
<charset name="utf8">
  <description>UTF-8 Unicode</description>

We tried both UTF-8 and utf-8 as specified by Manual:$wgDBTableOptions.

I have a couple of questions. First, can we omit --default-character-set since it's not working as expected? Second, if we have to use --default-character-set, then what is used to specify UTF-8?

A third, related question: can we forgo mysqldump altogether by taking the wiki and database offline and then making a physical copy of the database? I am happy to make a copy of the physical database for a restore, and I really don't care much for tools that cause more trouble than they solve.

If the third item is a viable option, then what is the physical database file that needs to be copied?

Wikidata Map October 2016

Published 28 Oct 2016 by addshore in Addshore.

It has been another 5 months since my last post about the Wikidata maps, and again some areas of the world have lit up. Since my last post, at least 9 noticeable areas have appeared with many new items containing coordinate locations. These include Afghanistan, Angola, Bosnia & Herzegovina, Burundi, Lebanon, Lithuania, Macedonia, South Sudan and Syria.

The difference map below was generated using Resemble.js. The pink areas show areas of difference between the two maps from April and October 2016.
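The underlying idea of such a difference map is simple to sketch. The snippet below is an illustrative stand-in using NumPy and invented 2x2 "maps", not Resemble.js's actual algorithm:

```python
import numpy as np

def diff_mask(img_a, img_b, tol=0):
    """Boolean mask of pixels that differ between two same-size RGB images,
    plus the fraction of the map that changed."""
    changed = (np.abs(img_a.astype(int) - img_b.astype(int)) > tol).any(axis=-1)
    return changed, changed.mean()

# Tiny synthetic example: one pixel "lights up" between the two maps.
april = np.zeros((2, 2, 3), dtype=np.uint8)
october = april.copy()
october[0, 0] = [255, 255, 255]
mask, frac = diff_mask(april, october)
print(frac)  # 0.25
```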

Who caused the additions?

To work out which items exist in the areas with a large amount of change, the Wikidata Query Service can be used. I adapted a simple SPARQL query to show the items within a radius of the centre of each area of increase. For example, Afghanistan used the following query:

 SELECT ?place ?placeLabel ?location ?instanceLabel WHERE {
  wd:Q889 wdt:P625 ?loc . 
  SERVICE wikibase:around { 
      ?place wdt:P625 ?location . 
      bd:serviceParam wikibase:center ?loc . 
      bd:serviceParam wikibase:radius "100" . 
  } 
  OPTIONAL { ?place wdt:P31 ?instance }
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en" }
  BIND(geof:distance(?loc, ?location) as ?dist) 
} ORDER BY ?dist

The query can be seen running here and above. The items can then be clicked on directly and their histories loaded.

The individual edits that added the coordinates can easily be spotted.

Of course this could also be done using a script following roughly the same process.
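Such a script would essentially just template the same query per region. A hypothetical sketch (the helper name is invented; Q889 and the 100 km radius are from the Afghanistan example above, and actually running the query would need an HTTP client POSTing to the query service):

```python
def around_query(region_qid, radius_km):
    """Build the WDQS 'around' query for items near a region's centre.
    Illustrative helper; region_qid is a Wikidata item ID like 'Q889'."""
    return f"""SELECT ?place ?placeLabel ?location ?instanceLabel WHERE {{
  wd:{region_qid} wdt:P625 ?loc .
  SERVICE wikibase:around {{
      ?place wdt:P625 ?location .
      bd:serviceParam wikibase:center ?loc .
      bd:serviceParam wikibase:radius "{radius_km}" .
  }}
  OPTIONAL {{ ?place wdt:P31 ?instance }}
  SERVICE wikibase:label {{ bd:serviceParam wikibase:language "en" }}
  BIND(geof:distance(?loc, ?location) as ?dist)
}} ORDER BY ?dist"""

print("wd:Q889" in around_query("Q889", 100))  # True
```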

It looks like many of the areas of mass increase can be attributed to Reinheitsgebot (Magnus Manske), due to a bot run in April 2016, and many of the coordinates in Lithuania to KrBot, due to a bot run in May 2016.

October 2016 maps

The October 2016 maps can be found on commons:

Labs project

I have given the ‘Wikidata Analysis’ tool a speedy reboot over the past weeks and generated many maps for many old dumps that are not currently on Wikimedia Commons.

The tool now contains a collection of date-stamped directories which contain the data generated by the Java dump-scanning tool, as well as the images that are then generated from that data using a Python script.

MediaWiki's VisualEditor component Parsoid not working after switching php7.0 to php5.7

Published 27 Oct 2016 by Dávid Kakaš in Newest questions tagged mediawiki - Ask Ubuntu.

I would like to ask you for your help with:

Because the forum CMS phpBB does not currently support PHP >= 7.0, I had to switch to php5.6 on my Ubuntu 16.04 LTS server. So I installed the php5.6 packages from ppa:ondrej/php and, by running:

sudo a2dismod php7.0 ; sudo a2enmod php5.6 ; sudo service apache2 restart
sudo ln -sfn /usr/bin/php5.6 /etc/alternatives/php

... I switched to php5.6.

Unfortunately, this caused my MediaWiki's VisualEditor to stop working. I had made the MediaWiki plug-in talk to the Parsoid server before switching PHP, and everything was working as expected. Also, when I switched back to php7.0 using:

sudo a2dismod php5.6 ; sudo a2enmod php7.0 ; sudo service apache2 restart
sudo ln -sfn /usr/bin/php7.0 /etc/alternatives/php

... the wiki is working fine once again; however, posts with phpBB functionalities like BBCodes and tags are failing to be submitted. Well, the php7.0 version is unsupported so I cannot complain, and so I am trying to make Parsoid work with php5.6 (which should be supported).

Error displayed when:

Other (possible) error symptoms:

[warning] [{MY_PARSOID_CONF_PREFIX}/Hlavná_stránka] non-200 response: 401 <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN"> <html><head> <title>401 Unauthorized</title> </head><body> <h1>Unauthorized</h1> <p>This server could not verify that you are authorized to access the document requested. Either you supplied the wrong credentials (e.g., bad password), or your browser doesn't understand how to supply the credentials required.</p> <hr> <address>Apache/2.4.18 (Ubuntu) Server at Port 443</address> </body></html>

... however, now I don't get any warnings in the log! Even when performing "sudo service parsoid status" it shows "/bin/sh -c /usr/bin/nodejs /usr/lib/parsoid/src/bin/server.js -c /etc/mediawiki/parsoid/server.js -c /etc/mediawiki/parsoid/settings.js >> /var/log/parsoid/parsoid.log 2>&1", which I hope means it is outputting error messages to the log.

I tried:

Possible Cause:

What do you think? Any suggestion how to solve or further test this problem?

P.S. Sorry for the badly formatted code in the question, but it somehow broke ... seems I am the problem after all :-D

Droplet Tagging: Organize Your Infrastructure

Published 25 Oct 2016 by DigitalOcean in DigitalOcean Blog.

At DigitalOcean, we are on a mission to make managing production applications simple. Today, we are officially announcing the addition of Droplet tags to make it even easier to work with large-scale production applications.

Last fall, we quietly launched tagging and management of resources via our public API. Since then, over 94,000 Droplets have been tagged including use cases like:

As developers ourselves, we know how important it is to stay organized when working on and managing applications. Tags are a simple and powerful way to do this.

How Do You Use Tags?

When we released tagging via the API, we received a lot of fantastic feedback. It was exciting to see our community embrace a feature to this extent, and it proved that we needed to add tags to our Cloud control panel too.

We've added tags to all Droplet-related views, like the main Droplets page, in order to make managing your Droplets and tags simpler from wherever you are - Cloud control panel, Metadata Service, and API.

Control panel

We also created a new tag-only view, which allows you to see all Droplets with a given tag. Here, you can see how our team groups our production Droplets by tag:

Control panel filtered by tag

For more detail on how to use tags via the control panel, check out our tagging tutorial on our Community Site.

What Can You Use Tags For?

Managing Resource Ownership

A simple tag like team:data or team:widget makes it easy to know exactly who is responsible for a given set of Droplets. For example, different teams in a company may share a single DigitalOcean Team, and can use tags to track their resource usage separately. Engineers on an on-call rotation, an ops-team, a finance team, or anyone simply debugging a problem can benefit from these kinds of tags as well.

Monitoring and Automation

Knowing the importance of a given Droplet to the healthy operation of a product is an essential part of ensuring the reliability of your system, and tagging your Droplets with env:production or env:dev can help facilitate this.

For example, if your alerting infrastructure is tag-aware, rules can be made less sensitive to increased load or memory usage on a staging or development server than on production servers. If your infrastructure management system is sufficiently mature, you may be able to self-heal by scaling your application servers automatically.

Similarly, with Prometheus' file-based service discovery and regular calls to the DigitalOcean API (e.g., by a cronjob), you can dynamically configure metrics based on tags. You can fine tune parameters like scrape interval, evaluation interval, and any external labels you want to apply — which may be tags themselves.
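As an illustration, generating a Prometheus file_sd target file from tagged Droplets might look like the sketch below. The Droplet records and the node_exporter port are assumptions; the JSON shape is Prometheus' file-based service-discovery format:

```python
import json

def to_file_sd(droplets):
    """Convert simplified tagged-Droplet records into Prometheus
    file-based service-discovery target groups."""
    groups = []
    for d in droplets:
        # turn "env:production"-style tags into Prometheus labels
        labels = dict(t.split(":", 1) for t in d["tags"] if ":" in t)
        groups.append({"targets": [f"{d['ip']}:9100"], "labels": labels})
    return json.dumps(groups, indent=2)

droplets = [{"ip": "203.0.113.10", "tags": ["env:production", "team:data"]}]
print(to_file_sd(droplets))
```

A cronjob could refresh this file from the DigitalOcean API and let Prometheus pick up the changes without a restart.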

Logging and Data Retention Policies

Logging and metric data is invaluable, especially during outages, but storing that data can be costly on high-traffic systems. Tagging resources and including those tags in your structured logs can be used to dictate log retention policies. This can help optimize disk usage to ensure critical infrastructure has the most log retention while test servers get little or none. Systems such as RSyslog can apply rules based on JSON-structured logs in CEE format.

Deployments and Infrastructure Management

A common strategy for testing and rolling out deployments is to use blue/green deployments. Implementing a blue/green deployment becomes easy with tags; simply use two tags, blue and green, to track which Droplets are in which set, then use the API to trigger the promotion (by switching the traffic direction later, e.g. by updating a Floating IP, load balancer configuration, or DNS record).
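The bookkeeping side of this can be sketched in a few lines (names are illustrative; the actual traffic switch would be an API call updating a Floating IP, load balancer, or DNS record):

```python
def droplets_with_tag(droplets, tag):
    """Select the names of Droplets carrying a given tag."""
    return [d["name"] for d in droplets if tag in d["tags"]]

def next_live(current):
    """After deploying to the idle set, promote it to live."""
    return "green" if current == "blue" else "blue"

fleet = [
    {"name": "web-1", "tags": ["blue"]},
    {"name": "web-2", "tags": ["green"]},
]
print(droplets_with_tag(fleet, next_live("blue")))  # ['web-2']
```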

Infrastructure management is an art in and of itself. Recently, our own Tommy Murphy contributed support for DigitalOcean's tags to HashiCorp's Terraform infrastructure automation platform. This has been used to build lightweight firewall management tooling (GitHub) to ensure that hosts with a given tag can pass traffic but will drop traffic from any other host.

What's Coming Up Next?

Being able to tag your Droplets is only the beginning. We know that Block Storage, Floating IPs, DNS records, and other resources are all critical parts of your production infrastructure too. In order to make your deployment, monitoring, and development infrastructure simpler to manage, we're working on letting you manage entire groups of resources via tags over the coming months.


Thank you to everyone who has used tags and provided feedback. We hope these improvements help make it a little easier for you to build and ship great things. Please keep the feedback coming. How do you use tagging to manage your infrastructure? We would love to hear from you!

Working the polls

Published 19 Oct 2016 by legoktm in The Lego Mirror.

After being generally frustrated by this election cycle and wanting to contribute to make it less so, I decided to sign up to work at the polls this year, and help facilitate the election. Yesterday, we had election officer training by the Santa Clara County Registrar of Voter's office. It was pretty fascinating to me given that I've only ever voted by mail, and haven't been inside a physical polling place in years. But the biggest takeaway I had, was that California goes to extraordinary lengths to ensure that everyone can vote. There's basically no situation in which someone who claims they are eligible to vote is denied being able to vote. Sure, they end up voting provisionally, but I think that is significantly better than turning them away and telling them they can't vote.

"wiki is currently unable to handle this request" after installing SimpleMathJax on MediaWiki

Published 19 Oct 2016 by hasanghaforian in Newest questions tagged mediawiki - Webmasters Stack Exchange.

I need to show mathematical terms in mediawiki-1.26.2, so I tried to install SimpleMathJax on MediaWiki. I followed the steps described on the extension page:

I downloaded the archive, extracted it, renamed it to SimpleMathJax, and moved it to the extensions directory of MediaWiki. I added these lines to LocalSettings.php:

# End of automatically generated settings.
# Add more configuration options below.
require_once "$IP/extensions/SimpleMathJax/SimpleMathJax.php";
$wgSimpleMathJaxSize = 120;

But when I want to browse to the Wiki, I get this error:

wiki is currently unable to handle this request.

Also, I tried replacing the require_once "$IP/extensions/SimpleMathJax/SimpleMathJax.php"; line with wfLoadExtension( 'SimpleMathJax' );, but the problem remains.

MediaWiki foreground not rendering tabs in content section

Published 8 Oct 2016 by Protocol96 in Newest questions tagged mediawiki - Server Fault.

We are having issues getting the foreground or foundation skins in MediaWiki to render any tabs in the content section of our pages. This site is a demo, hosted on GoDaddy, but we have also tried clean installs on Fedora locally and on Linode.

All the applicable CSS and JS seems to be loading correctly, and there are no obvious errors in the logs. The skin/theme does correctly render the navbar section at the top of the pages. Maybe we are doing something wrong in the syntax, or is there another step to enabling the skin/theme that we are missing?

Any help would be appreciated.

Google Assistant & Wikipedia

Published 6 Oct 2016 by addshore in Addshore.

The Google Assistant is essentially a chat bot that you can talk to within the new Allo chat app. The assistant is also baked into some new Google hardware, such as the Pixel phones. During a quick test of the assistant, I noticed that if you ask it to “tell me an interesting fact” sometimes it will respond with facts from Wikipedia.

As can be seen in the image, when chatting to the bot you can ask for an interesting fact. The bot then responds and a collection of suggested tiles are placed at the bottom of the chat window. One of these tiles suggests looking at the source. Clicking this will prompt you to open in a browser or in the Wikipedia app.

Once open, a quick scan of the article will reveal:

August is the month with highest birth rate in the United States.

Personalized Group Recommendations on Flickr

Published 30 Sep 2016 by Mehul Patel in

There are two primary paradigms for the discovery of digital content. First is the search paradigm, in which the user is actively looking for specific content using search terms and filters (e.g., Google web search, Flickr image search, Yelp restaurant search, etc.). Second is a passive approach, in which the user browses content presented to them (e.g., NYTimes news, Flickr Explore, and Twitter trending topics). Personalization benefits both approaches by providing relevant content that is tailored to users’ tastes (e.g., Google News, Netflix homepage, LinkedIn job search, etc.). We believe personalization can improve the user experience at Flickr by guiding both new as well as more experienced members as they explore photography. Today, we’re excited to bring you personalized group recommendations.

Flickr Groups are great for bringing people together around a common theme, be it a style of photography, camera, place, event, topic, or just some fun. Community members join for several reasons—to consume photos, to get feedback, to play games, to get more views, or to start a discussion about photos, cameras, life or the universe. We see value in connecting people with appropriate groups based on their interests. Hence, we decided to start the personalization journey by providing contextually relevant and personalized content that is tuned to each person’s unique taste.

Of course, in order to respect users’ privacy, group recommendations only consider public photos and public groups. Additionally, recommendations are private to the user. In other words, nobody else sees what is recommended to an individual.

In this post we describe how we are improving Flickr’s group recommendations. In particular, we describe how we are replacing a curated, non-personalized, static list of groups with a dynamic group recommendation engine that automatically generates new results based on user interactions to provide personalized recommendations unique to each person. The algorithms and backend systems we are building are broad and applicable to other scenarios, such as photo recommendations, contact recommendations, content discovery, etc.


Figure: Personalized group recommendations


One challenge of recommendations is determining a user’s interests. These interests could be user-specified, explicit preferences or could be inferred implicitly from their actions, supported by user feedback. For example:

Another challenge of recommendations is figuring out group characteristics. I.e.: what type of group is it? What interests does it serve? What brings Flickr members to this group? We can infer this by analyzing group members, photos posted to the group, discussions and amount of activity in the group.

Once we have figured out user preferences and group characteristics, recommendations essentially becomes a matchmaking process. At a high-level, we want to support 3 use cases:

Collaborative Filtering

One approach to recommender systems is presenting similar content in the current context of actions. For example, Amazon’s “Customers who bought this item also bought” or LinkedIn’s “People also viewed.” Item-based collaborative filtering can be used for computing similar items.


Figure: Collaborative filtering in action

By Moshanin (Own work) [CC BY-SA 3.0] from Wikipedia

Intuitively, two groups are similar if they have the same content or same set of users. We observed that users often post the same photo to multiple groups. So, to begin, we compute group similarity based on a photo’s presence in multiple groups.  

Consider the following sample matrix M(Gi -> Pj) constructed from group photo pools, where 1 means a corresponding group (Gi) contains an image, and empty (0) means a group does not contain the image.


From this, we can compute M.M’ (M’s transpose), which gives us the number of common photos between every pair of groups (Gi, Gj):


We use modified cosine similarity to compute a similarity score between every pair of groups:


To make this calculation robust, we only consider groups that have a minimum of X photos and keep only strong relationships (i.e., groups that have at least Y common photos). Finally, we use the similarity scores to come up with the top k-nearest neighbors for each group.
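As a toy illustration of the photo-overlap computation (matrix values invented here; the production version described above uses a modified cosine and minimum-overlap thresholds):

```python
import numpy as np

# Rows are groups, columns are photos: M[i, j] = 1 if group i contains photo j.
M = np.array([
    [1, 1, 0, 0],
    [1, 1, 1, 0],
    [0, 0, 1, 1],
])

# M.M' gives the number of common photos between every pair of groups.
common = M @ M.T

# Plain cosine similarity between group rows.
norms = np.linalg.norm(M, axis=1)
sim = common / np.outer(norms, norms)

print(common[0, 1])  # groups 0 and 1 share 2 photos
```

The same machinery applies to the group-user matrix discussed next; only the columns change meaning.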

We also compute group similarity based on group membership —i.e., by defining a group-user relationship (Gi -> Uj) matrix. It is interesting to note that the results obtained from this relationship are very different compared to the (Gi, Pj) matrix. The group-photo relationship tends to capture groups that are similar by content (e.g., “macro photography”). On the other hand, the group-user relationship gives us groups that the same users have joined but are possibly about very different topics, thus providing us with a diversity of results. We can extend this approach by computing group similarity using other features and relationships (e.g., autotags of photos to cluster groups by themes, geotags of photos to cluster groups by place, frequency of discussion to cluster groups by interaction model, etc.).

Using this we can easily come up with a list of similar groups (Use Case # 1). We can either merge the results obtained by different similarity relationships into a single result set, or keep them separate to power features like “Other groups similar to this group” and “People who joined this group also joined.”

We can also use the same data for recommending groups to users (Use Case # 2). We can look at all the groups that the user has already joined and recommend groups similar to those.

To come up with a list of relevant groups for a photo (Use Case # 3), we can compute photo similarity either by using a similar approach as above or by using Flickr computer vision models for finding photos similar to the query photo. A simple approach would then be to recommend groups that these similar photos belong to.


Due to the massive scale (millions of users x 100k groups) of data, we used Yahoo’s Hadoop Stack to implement the collaborative filtering algorithm. We exploited sparsity of entity-item relationship matrices to come up with a more efficient model of computation and used several optimizations for computational efficiency. We only need to compute the similarity model once every 7 days, since signals change slowly.


Figure: Computational architecture

(All logos and icons are trademarks of respective entities)


Similarity scores and top k-nearest neighbors for each group are published to Redis for quick lookups needed by the serving layer. Recommendations for each user are computed in real time when the user visits the groups page. The implementation of the serving layer takes care of a few aspects that are important from a usability and performance point of view:

Cold Start

The drawback to collaborative filtering is that it cannot offer recommendations to new users who do not have any associations. For these users, we plan to recommend groups from an algorithmically computed list of top/trending groups alongside manual curation. As users interact with the system by joining groups, the recommendations become more personalized.

Measuring Effectiveness

We use qualitative feedback from user studies and alpha group testing to understand user expectation and to guide initial feature design. However, for continued algorithmic improvements, we need an objective quantitative metric. Recommendation results by their very nature are subjective, so measuring effectiveness is tricky. The usual approach taken is to roll out to a random population of users and measure the outcome of interest for the test group as compared to the control group (ref: A/B testing).

We plan to employ this technique and measure user interaction and engagement to keep improving the recommendation algorithms. Additionally, we plan to measure explicit signals such as when users click “Not interested.” This feedback will also be used to fine-tune future recommendations for users.


Figure: Measuring user engagement

Future Directions

While we’re seeing good initial results, we’d like to continue improving the algorithms to provide better results to the Flickr community. Potential future directions can be classified broadly into 3 buckets: algorithmic improvements, new product use cases, and new recommendation applications.

If you’d like to help, we’re hiring. Check out our jobs page and get in touch.

Product Engineering: Mehul Patel, Chenfan (Frank) Sun,  Chinmay Kini

Updates 1.2.2 and 1.1.6 published

Published 27 Sep 2016 by Roundcube Webmail Dev Team in Roundcube Webmail Project News.

We just published updates to both stable versions 1.2 and 1.1, delivering important bug fixes and again more improvements to the Enigma plugin introduced in version 1.2. Version 1.1.6 comes with cherry-picked fixes from the more recent version and improvements in contacts searching, as well as a few localization fixes.

See the full changelog in the wiki and the selection for 1.1.6 on the release page.

Both versions are considered stable and we recommend updating all production installations of Roundcube to either of these versions. Download them from GitHub via

As usual, don’t forget to backup your data before updating!

Ready, Set, Hacktoberfest!

Published 26 Sep 2016 by DigitalOcean in DigitalOcean Blog.

October is a special time for open source enthusiasts, open source beginners, and for us at DigitalOcean: It marks the start of Hacktoberfest, which enters its third year this Saturday, October 1!

What's Hacktoberfest?

Hacktoberfest—in partnership with GitHub—is a month-long celebration of open source software. Maintainers are invited to guide would-be contributors towards issues that will help move the project forward, and contributors get the opportunity to give back to both projects they like and ones they've just discovered. No contribution is too small—bug fixes and documentation updates are valid ways of participating.

Rules and Prizes

To participate, first sign up on the Hacktoberfest site. And if you open up four pull requests between October 1 and October 31, you'll win a free, limited edition Hacktoberfest T-shirt. (Pull requests do not have to be merged and accepted; as long as they've been opened between the very start of October 1 and the very end of October 31, they count towards a free T-shirt.)

Connect with other Hacktoberfest participants (Hacktobefestants?) by using the hashtag, #Hacktoberfest, on your social media platform of choice.


A photo posted by Coston (@costonperkins) on

What's Different This Year

We wanted to make it easier for contributors to locate projects that needed help, and we also wanted project maintainers to have the ability to highlight issues that were ready to be worked on. To that end, we've introduced project labeling, allowing project maintainers to add a "Hacktoberfest" label to any issues that contributors could start working on. Browse participating projects on GitHub.

We've also put together a helpful list of resources for both project maintainers and contributors on the Hacktoberfest site.

Ready to get started with Hacktoberfest? Sign up to participate today.


The Festival Floppies

Published 22 Sep 2016 by Jason Scott in ASCII by Jason Scott.

In 2009, Josh Miller was walking through the Timonium Hamboree and Computer Festival in Baltimore, Maryland. Among the booths of equipment, sales, and demonstrations, he found a vendor selling an old collection of 3.5″ floppy disks for DOS and Windows. He bought it, and kept it.

A few years later, he asked me if I wanted them, and I said sure, and he mailed them to me. They fell into the everlasting Project Pile, and waited for my focus and attention.

They looked like this:


I was particularly interested in the floppies that appeared to be someone’s compilation of DOS and Windows programs in the most straightforward form possible – custom laser-printed directories on the labels, and no obvious theme as to why this shareware existed on them. They looked like this, separated out:


There were other floppies in the collection, as well:


They’d sat around for a few years while I worked on other things, but the time finally came this week to spend some effort to extract data.

There’s debates on how to do this that are both boring and infuriating, and I’ve ended friendships over them, so let me just say that I used a USB 3.5″ floppy drive (still available for cheap on Amazon; please take advantage of that) and a program called WinImage that will pull out a disk image in the form of a .ima file from the floppy drive. Yes, I could do a flux imaging of these disks, but sorry, that’s incredibly insane overkill. These disks contain files put on there by a person and we want those files, along with the accurate creation dates and the filenames and contents. WinImage does it.

Sometimes, the floppies have some errors and require trying over to get the data off them. Sometimes it takes a LOT of tries. If after a mass of tries I am unable to do a full disk scan into a disk image, I try just mounting it as A: in Windows and pulling the files off – they sometimes are just fine but other parts of the disk are dead. I make this a .ZIP file instead of a .IMA file. This is not preferred, but the data gets off in some form.

Some of them (just a handful) were not even up for this – they’re sitting in a small plastic bin and I’ll try some other methods in the future. The ratio of Imaged-to-ZIPped-to-Dead was very good, like 40-3-3.

I dumped most of the imaged files (along with the ZIPs) into this item.

This is a useful item if you, yourself, want to download about 100 disk image files and “do stuff” with them. My estimation is that all of you can be transported from the first floor to the penthouse of a skyscraper with 4 elevator trips. Maybe 3. But there you go, folks. They’re dropped there and waiting for you. Internet Archive even has a link that means “give me everything at once”. It’s actually not that big at all, of course – about 260 megabytes, less than half of a standard CD-ROM.

I could do this all day. It’s really easy. It’s also something most people could do, and I would hope that people sitting on top of 3.5” floppies from DOS or Windows machines would be up for paying the money for that cheap USB drive and something like WinImage and keep making disk images of these, labeling them as best they can.

I think we can do better, though.

The Archive is running the Emularity, which includes a way to run EM-DOSBOX, which can not only play DOS programs but even play Windows 3.11 programs as well.

Therefore, it’s potentially possible for many of these programs, especially ones particularly suited as stand-alone “applications”, to be turned into in-your-browser experiences to try them out. As long as you’re willing to go through them and get them prepped for emulation.

Which I did.


The Festival Floppies collection is over 500 programs pulled from these floppies that were imaged earlier this week. The only thing they have in common was that they were sitting in a box on a vendor table in Baltimore in 2009, and I thought in a glance they might run and possibly be worth trying out. After I thought this (using a script to present them for consideration), the script did all the work of extracting the files off the original floppy images, putting the programs into an Internet Archive item, and then running a “screen shotgun” I devised with a lot of help a few years back that plays the emulations, takes the “good shots” and makes them part of a slideshow so you can get a rough idea of what you’re looking at.


You either like the DOS/Windows aesthetic, or you do not. I can’t really argue with you over whatever direction you go – it’s both ugly and brilliant, simple and complex, dated and futuristic. A lot of it depended on the authors and where their sensibilities lay. I will say that once things started moving to Windows, a bunch of things took on a somewhat bland sameness due to the “system calls” for setting up a window, making it clickable, and so on. Sometimes a brave and hearty soul would jazz things up, but they got rarer indeed. On the other hand, we didn’t have 1,000 hobbyist and professional programs re-inventing the wheel, spokes, horse, buggy, stick shift and gumball machine each time, either.


Just browsing over the images, you probably can see cases where someone put real work into the whole endeavor – if they seem to be nicely arranged words, or have a particular flair with the graphics, you might be able to figure which ones have the actual programming flow and be useful as well. Maybe not a direct indicator, but certainly a flag. It depends on how much you want to crate-dig through these things.

Let’s keep going.

Using a “word cloud” script that showed up as part of an open source package, I rewrote it into something I call a “DOS Cloud”. It goes through these archives of shareware, finds all the textfiles in the .ZIP that came along for the ride (think README.TXT, READ.ME, FILEID.DIZ and so on) and then runs to see what the most frequent one and two word phrases are. This ends up being super informative, or not informative at all, but it’s automatic, and I like automatic. Some examples:

Mad Painter: paint, mad, painter, truck, joystick, drive, collision, press, cloud, recieve, mad painter, dos prompt

Screamer: screamer, code, key, screen, program, press, command, memory, installed, activate, code key, memory resident, correct code, key combination, desired code

D4W20: timberline, version, game, sinking, destroyer, gaming, smarter, software, popularity, timberline software, windows version, smarter computer, online help, high score

Certainly in the last case, those words are much more informative than the name D4W20 (which actually stands for “Destroyer for Windows Version 2.0”), and so the machine won the day. I’ve called this “bored intern” level before and I’d say it’s still true – the intern may be bored, but they never stop doing the process, either. I’m sure there’s some nascent class discussion here, but I’ll say that I don’t entirely think this is work for human beings anymore. It’s just more and more algorithms at this point. Reviews and contextual summaries not discernible from analysis of graphics and text are human work.
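The phrase-counting pass behind this can be sketched in a few lines of Python (a hypothetical reimplementation for illustration only, not the actual script; the file-name patterns are assumptions):

```python
import re
import zipfile
from collections import Counter

# Files that typically carry descriptive text inside shareware archives.
TEXT_NAMES = re.compile(r"(readme|read\.me|file_?id\.diz|\.txt$|\.doc$)", re.I)

def dos_cloud(zip_path, top=12):
    """Return the most frequent one- and two-word phrases found in the
    text files bundled inside a shareware .ZIP archive."""
    words = []
    with zipfile.ZipFile(zip_path) as zf:
        for name in zf.namelist():
            if TEXT_NAMES.search(name):
                text = zf.read(name).decode("cp437", errors="replace")
                words.extend(re.findall(r"[a-z]+", text.lower()))
    counts = Counter(words)                                          # one-word phrases
    counts.update(" ".join(pair) for pair in zip(words, words[1:]))  # two-word phrases
    return [phrase for phrase, _ in counts.most_common(top)]
```

Fed a shareware .ZIP, something like this yields a phrase list of the kind shown above.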

For now.


These programs! There are a lot of them, and a good percentage solve problems we don’t have anymore or use entire other methods to deal with the information. Single-use programs to deal with Y2K issues, view process tasks better, configure your modem, add a DOS interface, or track a pregnancy. Utilities to put the stardate in the task bar, applications around coloring letters, and so it goes. I think the screenshots help make decisions, if you’re one of the people idly browsing these sets and have no personal connection to DOS or Windows 3.1 as a lived experience.

I and others will no doubt write more and more complicated methods for extracting or providing metadata for these items, and work I’m doing in other realms goes along with this nicely. At some point, the entries for each program will have a complexity and depth that rivals most anything written about the subjects at the time, when they were the state of the art in computing experience. I know that time is coming, and it will be near-automatic (or heavily machine-assisted) and it will allow these legions of nearly-lost programs to live again as easily as a few mouse clicks.

But then what?


But Then What is rapidly becoming the greatest percentage of my consideration and thought, far beyond the relatively tiny hurdles we now face in terms of emulation and presentation. It’s just math now with a lot of what’s left (making things look/work better on phones, speeding up the browser interactions, adding support for disk swapping or printer output or other aspects of what made a computer experience lasting to its original users). Math, while difficult, has a way of outing its problems over time. Energy yields results. Processing yields processing.

No, I want to know what’s going to happen beyond this situation, when the phones and browsers can play old everything pretty accurately, enough that you’d “get it” to any reasonable degree playing around with it.

Where do we go from there? What’s going to happen now? This is where I’m kind of floating these days, and there are ridiculously scant answers. It becomes very “journey of the mind” as you shake the trees and only nuts come out.

To be sure, there’s a sliver of interest in what could be called “old games” or “retrogaming” or “remixes/reissues” and so on. It’s pretty much only games, it’s pretty much roughly 100 titles, and it’s stuff that has seeped enough into pop culture or whose parent companies still make enough bank that a profit motive serves to ensure the “IP” will continue to thrive, in some way.

The Gold Fucking Standard is Nintendo, who have successfully moved into such a radical space of “protecting their IP” that they’ve really successfully started moving into wrecking some of the past – people who make “fan remixes” might be up for debate as to whether they should do something with old Nintendo stuff, but laying out threats for people recording how they experienced the games, and for any recording of the games for any purpose… and sending legal threats at anyone and everyone even slightly referencing their old stuff, as a core function.. well, I’m just saying perhaps ol’ Nintendo isn’t doing itself any favors but on the other hand they can apparently be the most history-distorting dicks in this space quadrant and the new games still have people buy them in boatloads. So let’s just set aside the Gold Fucking Standard for a bit when discussing this situation. Nobody even comes close.

There’s other companies sort of taking this hard-line approach: “Atari”, Sega, Capcom, Blizzard… but again, these are game companies markedly defending specific games that in many cases they end up making money on. In some situations, it’s only one or two games they care about and I’m not entirely convinced they even remember they made some of the others. They certainly don’t issue gameplay video takedowns and on the whole, historic overview of the companies thrives in the world.

But what a small keyhole of software history these games are! There’s entire other experiences related to software that are both available, and perhaps even of interest to someone who never saw this stuff the first time around. But that’s kind of an educated guess on my part. I could be entirely wrong on this. I’d like to find out!

Pursuing this line of thought has sent me hurtling into What are even museums and what are even public spaces and all sorts of more general questions that I have extracted various answers for and which it turns out are kind of turmoil-y. It also has informed me that nobody kind of completely knows but holy shit do people without managerial authority have ideas about it. Reeling it over to the online experience of this offline debated environment just solves some problems (10,000 people look at something with the same closeness and all the time in the world to regard it) and adds others (roving packs of shitty consultant companies doing rough searches on a pocket list of “protected materials” and then sending out form letters towards anything that even roughly matches it, and calling it a ($800) day).

Luckily, I happen to work for an institution that is big on experiments and giving me a laughably long leash, and so the experiment of instant online emulated computer experience lives in a real way and can allow millions of people (it’s been millions, believe it or not) to instantly experience those digital historical items every second of every day.

So even though I don’t have the answers, at all, I am happy that the unanswered portions of the Big Questions haven’t stopped people from deriving a little joy, a little wonder, a little connection to this realm of human creation.

That’s not bad.


DNS inside PHP-FPM chroot jail on OpenBSD 6.0 running nginx 1.10.1, PHP 7.0.8, MariaDB 10.0.25 and MediaWiki 1.27.1

Published 18 Sep 2016 by Till Kraemer in Newest questions tagged mediawiki - Server Fault.

I'm running nginx 1.10.1 on OpenBSD 6.0 with the packages php-7.0.8p0, php-curl-7.0.8p0, php-fastcgi-7.0.8p0, php-gd-7.0.8p0, php-mcrypt-7.0.8p0, php-mysqli-7.0.8p0, mariadb-client-10.0.25v1 and mariadb-server-10.0.25p0v1.

I have several MediaWiki 1.27.1 installations, one pool for images and several language wikis accessing the pool. Each installation has its own virtual subdomain configured in nginx.

php70_fpm runs chrooted, /etc/php-fpm.conf looks like this:

chroot = /path/to/chroot/jail

listen = /path/to/chroot/jail/run/php-fpm.sock

/etc/nginx/nginx/sites-available/ looks like this:

fastcgi_pass   unix:run/php-fpm.sock;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;

/etc/my.cnf looks like this:

port            = 1234
socket          = /path/to/mysql.sock
bind-address    =

When I try to fetch image descriptions from on, I'm getting a "Couldn't resolve host" error.

As soon as I run php_fpm without chroot, file descriptions are fetched from the pool without any problem.

I don't want to copy stuff from /etc into /path/to/chroot/jail so what can I do? Are there some PHP 7 modules I could use? Do I have to play around with unbound?

Any help is more than welcome!

Thanks and cheers,


Simple MediaWiki backup

Published 18 Sep 2016 by Brian S in Newest questions tagged mediawiki - Server Fault.

I am currently on contract with a small (<250 accounts) municipal water supply company. One of the things I'm doing is rewriting their ten-years-out-of-date procedures manual, and after some discussion with the company's president and with the treasurer, I settled on a localhost MediaWiki install.

The problem I'm currently having is with a backup of the wiki. (The monitor of the laptop currently hosting the wiki began to fail this week, which moved data backup to the front of my priorities.) I can certainly back it up, and I know how to restore it from backup. However, this contracting job is not a permanent placement, and eventually the office manager(s) would be responsible for it. Unfortunately, they are not especially tech-savvy, and the MediaWiki backup instructions involve options like command-line tools, which are not things they are particularly interested in learning.

Is there any way I can simplify the backup & restore process (in particular, the database backup; I am confident the managers can handle files if need be)?

The computer running the localhost wiki is a laptop with Windows 10, running XAMPP (Apache 2.4.17, MySQL 5.0.11, PHP 5.6.21)

(Repost from SO after realizing this question is off-topic there.)
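For what it's worth, the database side of a MediaWiki backup boils down to a single mysqldump call (that much is in the MediaWiki manual); a sketch of a double-clickable wrapper for it, where the XAMPP path, database name (`wikidb`), and passwordless root login are all assumptions to adjust, might look like:

```python
import subprocess
import sys
from datetime import datetime
from pathlib import Path

# Assumed locations -- adjust for the actual XAMPP install and wiki DB name.
MYSQLDUMP = r"C:\xampp\mysql\bin\mysqldump.exe"
DATABASE = "wikidb"

def backup_path(dest_dir):
    """Build a timestamped .sql filename so older backups are never overwritten."""
    stamp = datetime.now().strftime("%Y-%m-%d_%H%M%S")
    return Path(dest_dir) / f"{DATABASE}-{stamp}.sql"

def run_backup(dest_dir="backups"):
    out = backup_path(dest_dir)
    out.parent.mkdir(parents=True, exist_ok=True)
    # --default-character-set=binary follows the MediaWiki manual's mysqldump
    # advice, to avoid character-set mangling in the dump.
    with open(out, "wb") as f:
        subprocess.run([MYSQLDUMP, "-u", "root",
                        "--default-character-set=binary", DATABASE],
                       stdout=f, check=True)
    return out

if __name__ == "__main__":
    print("Backup written to", run_backup(*sys.argv[1:]))
```

Restoring is the mirror image (feeding the .sql file back to mysql), which could be wrapped the same way.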

The RevisionSlider

Published 18 Sep 2016 by addshore in Addshore.

The RevisionSlider is an extension for MediaWiki that has just been deployed on all Wikipedias and other Wikimedia websites as a beta feature. The extension was developed by Wikimedia Germany as part of their focus on the technical wishes of the German-speaking Wikimedia community. This post will look at the RevisionSlider's design, development and use so far.

What is the RevisionSlider

Once enabled, the slider appears on the diff comparison page of MediaWiki, where it aims to let users more easily find the revision of a page that introduced or removed some text, as well as making navigation of the page's history easier. Each revision is represented by a vertical bar, extending upward from the centre of the slider for revisions that added content and downward for those that removed content. Two coloured pointers indicate the revisions currently being compared; the colour coding matches the colours of the revision changes in the diff view. Each pointer can be moved by dragging it to a new revision bar or by clicking on a bar, at which point the diff is reloaded using Ajax for the user to review. For pages with many revisions, arrows at the ends of the slider allow moving back and forward through revisions. Extra information about the revision a bar represents is shown in a tooltip on hover.

Deployment & Usage

The RevisionSlider was deployed in stages: first to test sites in mid-July 2016, then to the German Wikipedia and a few other sites that had been proactive in requesting the feature in late July 2016, and finally to all Wikimedia sites on 6 September 2016. In the 5 days following the deployment to all sites, the number of users using the feature increased from 1739 to 3721 (more than double) according to the Grafana dashboard. This means the beta feature now has more users than the “Flow on user talk page” feature, and will soon overtake the number of users with ORES enabled unless we see a sudden slowdown.

The wish

The wish that resulted in the creation of the RevisionSlider was wish #15 from the 2015 German Community Technical Wishlist; the Phabricator task can be found at The wish actually reads (roughly translated): "When viewing the diff, a section of the version history, especially the edit comments, should be shown." Lots of discussion follows to establish the actual issue the community was having with the diff page, and the consensus was that it was generally very hard to move from one diff to another. The standard process within MediaWiki requires the user to start from the history page to select a diff. The diff then allows moving forward or backward revision by revision, but big jumps are not possible without first navigating back to the history page.

The first test version of the slider was inspired by the user script called RevisionJumper. This script provided a drop-down menu in the diff view offering various options to jump to a version of the page considerably before or after the currently shown version. This can be seen in the German example below.

DerHexer (, „Gadget-revisionjumper11 de“,


The WMF Community Tech team worked on a prototype during autumn 2015, which was then picked up by WMDE at the Wikimedia Jerusalem hackathon in 2016 and pushed to fruition.

DannyH (WMF) (, „Revslider screenshot“,


Further development


Why the Apple II ProDOS 2.4 Release is the OS News of the Year

Published 15 Sep 2016 by Jason Scott in ASCII by Jason Scott.


In September of 2016, a talented programmer released his own cooked update to a major company’s legacy operating system, purely because it needed to be done. A raft of new features, wrap-in programs, and bugfixes were included in this release, which I stress was done as a hobby project.

The project is understatement itself, simply called ProDOS 2.4. It updates ProDOS, the last version of which, 2.0.3, was released in 1993.

You can download it, or boot it in an emulator on the webpage, here.

As an update unto itself, this item is a wonder – compatibility has been repaired for the entire Apple II line, from the first Apple II through to the Apple IIgs, as well as for various 6502 CPU variants (like the 65C02) and for newer cards installed in Apple IIs for USB-connected/emulated drives. Important utilities related to disk transfer, disk inspection, and program selection have joined the image. The footprint is smaller, and it runs faster than its predecessor (a wonder in any case of OS upgrades).

The entire list of improvements, additions and fixes is on the Internet Archive page I put up.


The reason I call this the most important operating system update of the year is multi-fold.

First, the pure unique experience of a 23-year gap between upgrades means that you can see a rare example of what happens when a computer environment just sits tight for decades, with many eyes on it and many notes about how the experience can be improved, followed by someone driven enough to go through methodically and implement all those requests. The inclusion of the utilities on the disk means we also have the benefit of all the after-market improvements in functionality that the continuing users of the environment needed, all time-tested, and all wrapped in without disturbing the size of the operating system programs themselves. It’s like a gold-star hall of fame of Apple II utilities packed into the OS they were inspired by.

This choreographed waltz of new and old is unique in itself.

Next is that this is an operating system upgrade free of commercial and marketing constraints and drives. Compared with, say, an iOS upgrade that trumpets the addition of a search function or blares out a proud announcement that they broke maps because Google kissed another boy at recess. Or Windows 10, the 1968 Democratic Convention Riot of Operating Systems, which was designed from the ground up to be compatible with a variety of mobile/tablet products that are on the way out, and which were shoved down the throats of current users with a cajoling, insulting methodology with misleading opt-out routes and freakier and freakier fake-countdowns.

The current mainstream OS environment is, frankly, horrifying, and to see a pure note, a trumpet of clear-minded attention to efficiency, functionality and improvement, stands in testament to the fact that it is still possible to achieve this, albeit a smaller, slower-moving target. Either way, it’s an inspiration.


Last of all, this upgrade is a valentine not just to the community who makes use of this platform, but to the ideas of hacker improvement calling back decades before 1993. The amount of people this upgrade benefits is relatively small in the world – the number of folks still using Apple IIs is tiny enough that nearly everybody doing so either knows each other, or knows someone who knows everyone else. It is not a route to fame, or a resume point to get snapped up by a start-up, or a game of one-upsmanship shoddily slapped together to prove a point or drop a “beta” onto the end as a fig leaf against what could best be called a lab experiment gone off in the fridge. It is done for the sake of what it is – a tool that has been polished and made anew, so the near-extinct audience for it works to the best of their ability with a machine that, itself, is thought of as the last mass-marketed computer designed by a single individual.

That’s a very special day indeed, and I doubt the remainder of 2016 will top it, any more than I think the first 9 months have.

Thanks to John Brooks for the inspiration this release provides. 

Support RAM-Intensive Workloads with High Memory Droplets

Published 12 Sep 2016 by DigitalOcean in DigitalOcean Blog.

At DigitalOcean, we aim to make it simple and intuitive for developers to build and scale their infrastructure, from an application running on a single Droplet to a highly distributed service running across thousands of Droplets. As applications grow and become more specialized, so too do the configurations needed to run them effectively. Recently, with the launch of Block Storage, we made it easy to scale storage independently from compute at a lower price point than before. Today, we're doing something similar for RAM with the release of High Memory Droplet plans.

Standard Droplets offer a great balance of RAM, CPU, and storage for most general use-cases. Our new High Memory Droplets are optimized for RAM-intensive use-cases such as high-performance databases, in-memory caches like Redis or Memcache, or search indexes.

High Memory Droplet plans start with 16GB and scale up to 224GB of RAM with smaller ratios of local storage and CPU relative to Standard Plans. They are priced 25% lower than our Standard Plans on a per-gigabyte of RAM basis. Find all the details in the chart below and on our pricing page.

Pricing chart

We're actively looking at ways to support more specialized workloads and provide a platform that enables developers to tailor their environment to their applications' needs. We'd love to hear how we can better support your use-case. Let us know in the comments or over on our UserVoice.

GitHub service to deploy via git-mediawiki

Published 10 Sep 2016 by user48147 in Newest questions tagged mediawiki - Webmasters Stack Exchange.

I've been helping write documentation and manage the website for SuperTuxKart, an open-source racing game. The website uses MediaWiki, but after our switch away from SourceForge hosting we discussed things and decided not to allow free account creation. However, this left us in a dilemma as to how to allow contributions to the wiki while avoiding the spam accounts that plagued the previous one.

We decided that allowing content to be submitted via pull requests on GitHub, then deployed to MediaWiki, would work well. After some research and experimenting, I created a semi-working shell script that uses git-mediawiki to

  1. Clone the wiki
  2. Push the wiki to GitHub
  3. Fetch and merge changes from the wiki
  4. Fetch and merge changes from GitHub (though the wiki has priority in case of a merge conflict)
  5. Push to the wiki and to GitHub.

What I am looking for is a GitHub webhook service to run this script regularly (e.g. every 15 minutes) and whenever there is a commit to GitHub. It also needs some method of write access to the git repository without using my own credentials. I can't just have a script git pull updates to the server because MediaWiki pages can't be read from a normal git repository; they must be in a database.
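Pending a proper webhook, the every-15-minutes half of that could be covered by a plain cron entry along these lines (a sketch only: the script location and log path are hypothetical, and commit-triggered runs would still need a separate webhook listener):

```
*/15 * * * * /home/stk/wiki-sync.sh >> /home/stk/wiki-sync.log 2>&1
```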

The content of my script is below:

# Auto sync script for the SuperTuxKart wiki

# Set up repo if not already done
if ! [ -d "" ]; then
    echo "Setting up repository..."

    git clone --origin wiki mediawiki::
    cd ""
    git remote add github
    git push github master
fi

cd ""

git pull --rebase wiki
git pull --rebase -s recursive -X ours github master

git push wiki master
git push github master

Who’s Going to be the Hip Hop Hero

Published 8 Sep 2016 by Jason Scott in ASCII by Jason Scott.

People often ask me if there’s a way they can help. I think I have something.

So, the Internet Archive has had a wild hit on its hand with the Hip Hop Mixtapes collection, which I’ve been culling from multiple sources and then shoving into the Archive’s drives through a series of increasingly complicated scripts. When I run my set of scripts, they do a good job of yanking the latest and greatest from a selection of sources, doing all the cleanup work, verifying the new mixtapes aren’t already in the collection, and then uploading them. From there, the Archive’s processes do the work, and then we have ourselves the latest tapes available to the world.

Since I see some of these tapes get thousands of listens within hours of being added, I know this is something people want. So, it’s a success all around.


With success, of course, comes the two flipside factors: My own interest in seeing the collection improved and expanded, and the complaints from people who know about this subject finding shortcomings in every little thing.

There is a grand complaint that this collection currently focuses on mixtapes from 2000 onwards (and really, 2005 onwards). Guilty. That’s what’s easiest to find. Let’s set that one aside for a moment, as I’ve got several endeavors to improve that.

What I need help with is that there are a mass of mixtapes that quickly fell off the radar in terms of being easily downloadable and I need someone to spend time grabbing them for the collection.

While impressive, the 8,000 tapes up on the Archive are actually just the ones that could be grabbed by scripts without any hangups, like tapes falling out of favor or the sites offering them going down. If you use the global list I have, the total amount of tapes could be as high as 20,000.

Again, it’s a shame that a lot of pre-2000 mixtapes haven’t yet fallen into my lap, but it’s really a shame that mixtapes that existed, were uploaded to the internet, and were readily available just a couple of years ago have faded into obscurity. I’d like someone (or a coordinated group of someones) to help grab those disparate and at-risk mixtapes to get them into the collection.

I have information on all these missing tapes – the song titles, the artist information, and even information on mp3 size and what was in the original package. I’ve gone out there and tried to do this work, and I can do it, but it’s not a good use of my time – I have a lot of things I have to do and dedicating my efforts in this particular direction means a lot of other items will suffer.

So I’m reaching out to you. Hit me up at and help me build a set of people who are grabbing this body of work before it falls into darkness.


php 5.4 on CentOS7

Published 7 Sep 2016 by user374636 in Newest questions tagged mediawiki - Server Fault.

I am trying to install MediaWiki 1.27 on CentOS7.2. CentOS7.2 comes with php 5.4. However, at least 5.5.9 is required for MediaWiki 1.27.

I have installed and enabled rh-php56 from SCL repo which installed php5.6 in parallel with CentOS stock php5.4.

Unfortunately, MediaWiki still gives me an error that I am running php5.4. Is there a way I can point MediaWiki to start using the newer php5.6 instead? Or am I better off replacing the stock php5.4 with php5.6 from Remi's repository?

Thank you!

Mediawiki LDAP setup issues

Published 7 Sep 2016 by justin in Newest questions tagged mediawiki - Server Fault.

I have MediaWiki set up on a Fedora machine and am attempting to get it working with our AD credentials. It successfully connects to our AD server, and you can log into MediaWiki fine with them. However, I am now trying to restrict it so that only our IT department users can log on. I can't seem to get the setup correct, though; the relevant section of my LocalSettings file is below:

$wgAuth = new LdapAuthenticationPlugin();
$wgLDAPDomainNames = array("MYDOMAIN");
$wgLDAPServerNames = array("MYDOMAIN" => "DOMAINIP");
$wgLDAPSearchStrings = array("MYDOMAIN" => "MYDOMAIN\\USER-NAME");
$wgLDAPEncryptionType = array("MYDOMAIN" => "ssl");

$wgLDAPBaseDNs = array("MYDOMAIN" => "dc=MYDOMAIN,dc=com");
$wgLDAPSearchAttributes = array("MYDOMAIN"=>"sAMAccountName");
$wgLDAPRetrievePrefs = array("MYDOMAIN" =>true);
$wgLDAPPreferences = array("MYDOMAIN" =>array('email' => 'mail','realname'=>'displayname'));
$wgLDAPDebug =3;
$wgLDAPExceptionDetails = true;

$wgLDAPRequiredGroups = array("MYDOMAIN" => array("OU=Users,OU=IT,OU=Admin,DC=MYDOMAIN,DC=com"));

If I remove that last line about required groups, I can log in fine. Our AD folder setup, from top to bottom, is MYDOMAIN -> Admin -> IT -> Users -> John Doe. But like I said, with that last line in place no one can log in to our MediaWiki.

Introducing Hatch (Beta)

Published 6 Sep 2016 by DigitalOcean in DigitalOcean Blog.

We're excited to launch Hatch (currently in beta), an online incubator program designed to help and support startups. Infrastructure can be one of the largest expenses facing these companies as they begin to scale. With Hatch, startups can receive access to both DigitalOcean credit and a range of other resources like 1-on-1 technical consultations.

Our goal with Hatch is to give back to the startup ecosystem and provide support to founders around the world so they can focus on building their businesses and not worry about their infrastructure. Having come through the Techstars program, we know just how valuable this support network can be.

The Hatch program includes a range of perks for startups to get started, including 12 months of DigitalOcean credit up to $100,000 (actual amount varies by partner organization). The program also offers various support services such as 1-on-1 technical consultations, access to mentorship opportunities, solutions engineering, and priority support. We're looking to go beyond just offering infrastructure credits. We want to provide founders with an educational and networking experience that will add tremendous value to their startup for the long term.

Is my startup eligible?

Starting now, we are piloting the program to a small group of startups. While in beta, we'll be working to refine the offering and eligibility criteria for future bootstrapped and funded startups who apply.

As of today (September 7, 2016), here are the Hatch eligibility requirements for startups:

You can apply to Hatch by visiting and completing the online application. Want to learn more? Read the FAQ.

Want to become a partner?

We're currently adding over a hundred accelerators, investors, and partners to introduce startups around the world to the Hatch community. If you're interested in becoming a portfolio partner of Hatch, you can apply here.

Is your startup eligible and do you plan on applying? We'd love to hear from you! Reach out to us on Twitter or use the #hatchyouridea hashtag to tell us what your startup is all about.

Using Vault as a Certificate Authority for Kubernetes

Published 5 Sep 2016 by DigitalOcean in DigitalOcean Blog.

The Delivery team at DigitalOcean is tasked to make shipping internal services quick and easy. In December of 2015, we set out to design and implement a platform built on top of Kubernetes. We wanted to follow the best practices for securing our cluster from the start, which included enabling mutual TLS authentication between all etcd and Kubernetes components.

However, this is easier said than done. DigitalOcean currently has 12 datacenters on 3 continents. We needed to deploy at least one Kubernetes cluster to each datacenter, but setting up the certificates for even a single Kubernetes cluster is a significant undertaking, not to mention dealing with certificate renewal and revocation for every datacenter.

So, before we started expanding the number of clusters, we set out to automate all certificate management using Hashicorp's Vault. In this post, we'll go over the details of how we designed and implemented our certificate authority (CA).


We found it helpful to look at all of the communication paths before designing the structure of our certificate authority.

communication paths diagram

All Kubernetes operations flow through the kube-apiserver and persist in the etcd datastore. etcd nodes should only accept communication from their peers and the API server. The kubelets or other clients must not be able to communicate with etcd directly. Otherwise, the kube-apiserver's access controls could be circumvented. We also need to ensure that consumers of the Kubernetes API are given an identity (a client certificate) to authenticate to kube-apiserver.

With that information, we decided to create 2 certificate authorities per cluster. The first would be used to issue etcd related certificates (given to each etcd node and the kube-apiserver). The second certificate authority would be for Kubernetes, issuing the kube-apiserver and the other Kubernetes components their certificates. The diagram above shows the communications that use the etcd CA in dashed lines and the Kubernetes CA in solid lines.

With the design finalized, we could move on to implementation. First, we created the CAs and configured the roles to issue certificates. We then configured vault policies to control access to CA roles and created authentication tokens with the necessary policies. Finally, we used the tokens to pull the certificates for each service.

Creating the CAs

We wrote a script that bootstraps the CAs in Vault required for each new Kubernetes cluster. This script mounts new pki backends to cluster-unique paths and generates a 10-year root certificate for each pki backend.

vault mount -path $CLUSTER_ID/pki/$COMPONENT pki
vault mount-tune -max-lease-ttl=87600h $CLUSTER_ID/pki/$COMPONENT
vault write $CLUSTER_ID/pki/$COMPONENT/root/generate/internal \
    common_name=$CLUSTER_ID/pki/$COMPONENT ttl=87600h

In Kubernetes, it is possible to use the Common Name (CN) field of client certificates as their user name. We leveraged this by creating different roles for each set of CN certificate requests:

vault write $CLUSTER_ID/pki/etcd/roles/member \
    allow_any_name=true \
    ttl=720h

The role above, under the cluster's etcd CA, can create a 30 day cert for any CN. The role below, under the Kubernetes CA, can only create a certificate with the CN of "kubelet".

vault write $CLUSTER_ID/pki/k8s/roles/kubelet \
    allowed_domains="kubelet" \
    allow_bare_domains=true \
    allow_subdomains=false \
    ttl=720h

We can create roles that are limited to individual CNs, such as "kube-proxy" or "kube-scheduler", for each component that we want to communicate with the kube-apiserver.
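For example, a sketch of one such single-CN role (the role name and TTL here are assumed for illustration, mirroring the kubelet role above):

```shell
# Sketch: a role that can only issue certificates with CN "kube-proxy",
# so a leaked token for this role cannot impersonate any other component.
vault write $CLUSTER_ID/pki/k8s/roles/kube-proxy \
    allowed_domains="kube-proxy" \
    allow_bare_domains=true \
    allow_subdomains=false \
    ttl=720h
```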

Because we configure our kube-apiserver in a high availability configuration, separate from the kube-controller-manager, we also generated a shared secret for those components to use with the --service-account-private-key-file flag and write it to the generic secrets backend:

openssl genrsa 4096 > token-key
vault write secret/$CLUSTER_ID/k8s/token key=@token-key
rm token-key

In addition to these roles, we created individual policies for each component of the cluster which are used to restrict which paths individual vault tokens can access. Here, we created a policy for etcd members that will only have access to the path to create an etcd member certificate.

cat <<EOT | vault policy-write $CLUSTER_ID/pki/etcd/member -
path "$CLUSTER_ID/pki/etcd/issue/member" {
  policy = "write"
}
EOT

This kube-apiserver policy only has access to the path to create a kube-apiserver certificate and to read the service account private key generated above.

cat <<EOT | vault policy-write $CLUSTER_ID/pki/k8s/kube-apiserver -
path "$CLUSTER_ID/pki/k8s/issue/kube-apiserver" {
  policy = "write"
}
path "secret/$CLUSTER_ID/k8s/token" {
  policy = "read"
}
EOT

Now that we have the structure of CAs and policies created in Vault, we need to configure each component to fetch and renew its own certificates.

Getting Certificates

We provided each machine with a Vault token that can be renewed indefinitely. This token is only granted the policies that it requires. We set up the token role in Vault with:

vault write auth/token/roles/k8s-$CLUSTER_ID \
    period="720h" \
    orphan=true

Then, we built tokens from that token role with the necessary policies for the given node. As an example, the etcd nodes were provisioned with a token generated from this command:

vault token-create \
  -policy="$CLUSTER_ID/pki/etcd/member" \
  -role="k8s-$CLUSTER_ID"

All that is left now is to configure each service with the appropriate certificates.

Configuring the Services

We chose to use consul-template to configure services since it will take care of renewing the Vault token, fetching new certificates, and notifying the services to restart when new certificates are available. Our etcd node consul-template configuration is:

{
  "template": {
    "source": "/opt/consul-template/templates/cert.template",
    "destination": "/opt/certs/etcd.serial",
    "command": "/usr/sbin/service etcd restart"
  },
  "vault": {
    "address": "VAULT_ADDRESS",
    "token": "VAULT_TOKEN",
    "renew": true
  }
}

Because consul-template will only write one file per template and we needed to split our certificate into its components (certificate, private key, and issuing certificate), we wrote a custom plugin that takes in the data, a file path, and a file owner. Our certificate template for etcd nodes uses this plugin:

{{ with secret "$CLUSTER_ID/pki/etcd/issue/member" "common_name=$FQDN"}}
{{ .Data.serial_number }}
{{ .Data.certificate | plugin "certdump" "/opt/certs/etcd-cert.pem" "etcd"}}
{{ .Data.private_key | plugin "certdump" "/opt/certs/etcd-key.pem" "etcd"}}
{{ .Data.issuing_ca | plugin "certdump" "/opt/certs/etcd-ca.pem" "etcd"}}
{{ end }}

The etcd process was then configured with the following options so that both peers and clients must present a certificate issued from Vault in order to communicate:
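A sketch of what that configuration looks like, assuming the certificate paths written by the certdump template above and the standard etcd TLS flags (the exact flags used in production are not reproduced here):

```shell
# Sketch: etcd with mutual TLS on both the client-facing and peer-facing
# listeners, using the certificates fetched from Vault by consul-template.
etcd \
  --cert-file=/opt/certs/etcd-cert.pem \
  --key-file=/opt/certs/etcd-key.pem \
  --trusted-ca-file=/opt/certs/etcd-ca.pem \
  --client-cert-auth \
  --peer-cert-file=/opt/certs/etcd-cert.pem \
  --peer-key-file=/opt/certs/etcd-key.pem \
  --peer-trusted-ca-file=/opt/certs/etcd-ca.pem \
  --peer-client-cert-auth
```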


The kube-apiserver has one certificate template for communicating with etcd and one for the Kubernetes components, and the process is configured with the appropriate flags:
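A sketch of those flags (certificate file paths are hypothetical; the flag names are the standard kube-apiserver TLS options):

```shell
# Sketch: kube-apiserver TLS configuration. The first three flags present a
# client certificate to etcd; the two tls-* flags serve the API over TLS; the
# last flag verifies that client certificates were issued by the Kubernetes CA.
kube-apiserver \
  --etcd-cafile=/opt/certs/etcd-ca.pem \
  --etcd-certfile=/opt/certs/apiserver-etcd-cert.pem \
  --etcd-keyfile=/opt/certs/apiserver-etcd-key.pem \
  --tls-cert-file=/opt/certs/apiserver-cert.pem \
  --tls-private-key-file=/opt/certs/apiserver-key.pem \
  --client-ca-file=/opt/certs/k8s-ca.pem
```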


The first three etcd flags allow the kube-apiserver to communicate with etcd with a client certificate; the two TLS flags allow it to host the API over a TLS connection; the last flag allows it to verify clients by ensuring that their certificates were signed by the same CA that issued the kube-apiserver certificate.


Each component of the architecture is issued a unique certificate and the entire process is fully automated. Additionally, we have an audit log of all certificates issued, and frequently exercise certificate expiration and rotation.

We did have to put in some time up front to learn Vault, discover the appropriate command line arguments, and integrate the solution discussed here into our existing configuration management system. However, by using Vault as a certificate authority, we drastically reduced the effort required to set up and maintain many Kubernetes clusters.

Add Exif data back to Facebook images

Published 4 Sep 2016 by addshore in Addshore.

I start this post not by talking about Facebook, but about Google Photos. Google now offers unlimited ‘high resolution’ images within its service, where high resolution is defined as 16MP for an image and 1080p for video. Of course there is some compression here that some may argue against, but photos and video can also be uploaded at original quality (exactly as captured) and the cost of space for these files is very reasonable. So, it looks like I have found a new home for my piles of photos and videos that I want to be able to look back at in 20 years!

Prior to Google Photos developments I stored a reasonable number of images on Facebook, and now I want to also add them all to Google Photos, but that is not as easy as I first thought. All of your Facebook data can easily be downloaded, which includes all of your images and videos, but not exactly as they were when you uploaded them, as they have all of the exif data such as location and time stripped. This data is actually available in an html file which is served with each Facebook album. So, I wrote a terribly hacky script in PHP for Windows to extract that data and re-add it to the files so that they can be bulk uploaded to Google Photos and take advantage of the timeline and location features.

The code can be found below (it looks horrible but works…)


<?php

// README: Set the path to the extracted facebook dump photos directory here
$directory = 'C:/Users/username/Downloads/facebook-username/photos';

// README: Download exiftool and set the path here (of the renamed exe)
$tool = 'C:\Users\username\exiftool.exe';

//     Do not touch anything below here...    // =]

echo "Starting\n";

$albums = glob( $directory . '/*', GLOB_ONLYDIR );

foreach ( $albums as $album ) {
    echo "Running for album $album\n";
    $indexFile = $album . '/index.htm';
    $dom = DOMDocument::loadHTMLFile( $indexFile );
    $finder = new DomXPath( $dom );
    $blockNodes = $finder->query( "//*[contains(concat(' ', @class, ' '), ' block ')]" );
    foreach ( $blockNodes as $blockNode ) {
        $imageNode = $blockNode->firstChild;
        $imgSrc = $imageNode->getAttribute( 'src' );
        $imgSrcParts = explode( '/', $imgSrc );
        $imgSrc = array_pop( $imgSrcParts );
        $imgLocation = $album . '/' . $imgSrc;

        echo "Running for file $imgLocation\n";

        // Collect the metadata Facebook stored in the album's index.htm
        $details = array();
        $metaDiv = $blockNode->lastChild;
        $details['textContent'] = $metaDiv->firstChild->textContent;
        $metaTable = $metaDiv->childNodes->item( 1 );
        foreach ( $metaTable->childNodes as $rowNode ) {
            $details[$rowNode->firstChild->textContent] = $rowNode->lastChild->textContent;
        }

        $toChange = array();

        $toChange[] = '"-EXIF:ModifyDate=' . date_format( new DateTime(), 'Y:m:d G:i:s' ) . '"';

        if ( array_key_exists( 'Taken', $details ) ) {
            $toChange[] = '"-EXIF:DateTimeOriginal=' .
                date_format( new DateTime( "@" . $details['Taken'] ), 'Y:m:d G:i:s' ) . '"';
        }
        if ( array_key_exists( 'Camera Make', $details ) ) {
            $toChange[] = '"-EXIF:Make=' . $details['Camera Make'] . '"';
        }
        if ( array_key_exists( 'Camera Model', $details ) ) {
            $toChange[] = '"-EXIF:Model=' . $details['Camera Model'] . '"';
        }
        // Doing this will cause odd rotations.... (as facebook has already rotated the image)...
        // if ( array_key_exists( 'Orientation', $details ) ) {
        //     $toChange[] = '"-EXIF:Orientation=' . $details['Orientation'] . '"';
        // }
        if ( array_key_exists( 'Latitude', $details ) && array_key_exists( 'Longitude', $details ) ) {
            $toChange[] = '"-EXIF:GPSLatitude=' . $details['Latitude'] . '"';
            $toChange[] = '"-EXIF:GPSLongitude=' . $details['Longitude'] . '"';
            // Tool will look at the sign used for NSEW!
            $toChange[] = '"-EXIF:GPSLatitudeRef=' . $details['Latitude'] . '"';
            $toChange[] = '"-EXIF:GPSLongitudeRef=' . $details['Longitude'] . '"';
            $toChange[] = '"-EXIF:GPSAltitude=' . '0' . '"';
        }

        exec( $tool . ' ' . implode( ' ', $toChange ) . ' ' . $imgLocation );
    }
}

echo "Done!\n";

I would rewrite it but I have no need to (as it works). But when searching online for some code to do just this I came up short and thus thought I would post the rough idea and process for others to find, and perhaps improve on.

Karateka: The Alpha and the Beta

Published 31 Aug 2016 by Jason Scott in ASCII by Jason Scott.

As I enter into a new phase of doing things and how I do things, let’s start with something pleasant.


As part of the work with pulling Prince of Persia source code from a collection of disks a number of years back (the lion’s share of the work done by Tony Diaz), Jordan Mechner handed me an additional pile of floppies.

Many of these floppies have been imaged and preserved, but a set of them had not, mostly due to coming up with the time and “doing it right” and all the other accomplishment-blocking attributes of fractal self-analysis. That issue is now being fixed, and you are encouraged to enjoy the immediate result.

As Karateka (1985) became a huge title for Brøderbund Software, they wanted the program to run on as many platforms as possible. However, the code was not written to be portable; Brøderbund instead contracted with a number of teams to make Karateka versions on hardware other than the Apple II. The work by these teams, Jordan Mechner told me, often suffered from being ground-up rewrites of the original game idea – they would simply make it look like the game, without really spending time duplicating the internal timing or logic that Jordan had put into the original. Some came out fine on the other end; others did not.

Jordan’s opinion on the IBM port of Karateka was not positive. From his Making-of-Karateka journal (thanks to  for finding this entry):


You can now see how it looked and played when he made these comments. I just pulled out multiple copies of Karateka from a variety of internally distributed floppies Jordan had in the set he gave me. I chose two representative versions and now you can play them both on the Internet Archive.

The first version is what would now be called the “Alpha”, but which in this collection is just called “Version 1986-01-30”, and was duplicated on February 4, 1986. It is a version which was obviously done as some sort of milestone – debugging information is everywhere, and it starts with a prompt of which levels to try, before starting the game.

Without going too much into the specific technical limitations of PC Compatibles of the time, I’ll instead just offer the following screenshot, which will connect you to an instantly playable-in-browser version of the Karateka Alpha. This has never been released before.


You can see all sorts of weird artifacts and performance issues with the Alpha – glitches in graphics and performance, and of course the ever-present debugging messages and system information. The contractors doing the work, the Connelly Group, have no presence on the internet in any obvious web searches – they may have just been internal employees, or a name given to some folks just to keep distance between “games” work and “real” work; maybe that information will come out.

The floppy this came on, as shown above, had all sorts of markings for Brøderbund to indicate what the build number was, who had the floppies (inventory control), and that the disk had no protection routines on it, which makes my life in the present day notably easier. Besides the playable version of the information in a ZIP file, there is an IMG file of the entire 360k floppy layout, usable by a number of emulators or viewers.

The next floppy in terms of time stamp is literally called BETA, from March 3, 1986. With over a month of effort into the project, a bunch of bugs have been fixed, screens added, and naturally all the debugging information has been stripped away. I’m assuming this was for playtesters to check out, or to be used by marketing/sales to begin the process of selling it in the PC world. Here is a link to an instantly playable-in-browser version of the Karateka Beta. This has also never been released before.


For the less button-mashy of us, here are the keys and a “handing it over to you at the user group meeting” version of how Karateka works.

You’re a dude going from the left to the right. If you go too far left, you will fall off the cliff and die. To the right are a bunch of enemies. You can either move or fight. If you are not in a fighting stance, you will die instantly, but in a fighting stance, you will move really slowly.

You use the arrow keys (left and right) to move. Press SPACE to flip between “moving” and “fighting” modes. The Q, A, and Z keys are high, middle and low punches. The W, S and X keys are high middle and low kicks. The triangles on the bottom are life meters. Whoever runs out of triangles first in a fight will die.

It’s worthwhile to note that the games, being an Alpha and a Beta, are extremely rough. I wouldn’t suggest making them your first game of Karateka ever – that’s where you should play the original Apple II version.

Karateka is a wealth of beginnings for understanding entertainment software – besides being a huge hit for Brøderbund, it’s an aesthetic masterwork, containing cinematic cutscenes and a clever pulling of cultural sources to combine into a multi-layered experience on a rather simple platform. From this groundwork, Jordan would go on to make Prince of Persia years later, and bring these efforts to another level entirely. He also endeavored to make the Prince of Persia code as portable and documented as possible, so different platforms would have similar experiences in terms of playing.

In 2012, Jordan released a new remake/reboot of Karateka, which is also cross-platform (the platforms now being PC, iOS, PS4, Xbox and so on) and is available at KARATEKA.COM. It is a very enjoyable remake. There are also ports of “Karateka Classic” for a world where your controls have to be onscreen, like this one.

In a larger sense, it’d be a wonderful world where a lot of old software was available for study, criticism, discussion and so on. We have scads of it, of course, but there’s so much more to track down. It’s been a driving effort of mine this year, and it continues.

But for now, let’s enjoy a really, really unpleasant but historically important version of Karateka.

HTTPS is live on

Published 26 Aug 2016 by Pierrick Le Gall in The Blog.

Some of you were waiting for it, others don’t know yet what it’s all about!

HTTPS is the way to encrypt communications between your web browser and the website you visit. Your Piwigo for instance. It is mainly useful for the log in form and administration pages. Your password is no longer sent in “plain text” through internet nodes, like your internet provider or servers.

SSL certificate in action for HTTPS

SSL certificate in action for HTTPS

How to use it?

For now, Piwigo doesn’t automatically use HTTPS. You have to switch manually if you want HTTPS. Just add “s” after “http” in the address bar of your web browser.

In the next few days or weeks, Piwigo will automatically switch to HTTPS on the login form and the pages you open afterwards.

Why wasn’t HTTPS already available? was born 6 years ago and HTTPS already existed at that time. Here are the 3 main reasons for the wait:

  1. Piwigo is a photo management software, not a bank. Such a level of security was not considered a priority, compared to other features.
  2. the Piwigo application and its related project, without considering hosting, have needed some code changes to work flawlessly with HTTPS. Today we’re proud to say Piwigo works great with multiple addresses, with or without HTTPS. Piwigo automatically uses the appropriate web address. If you have worked with other web applications, you certainly know how much Piwigo makes your life easy when dealing with URLs.
  3. the multiple servers infrastructure on, with multiple sub-domains * have made the whole encryption system a bit complex. Without going into details, and for those of you interested, we use a wildcard SSL certificate from Gandi. The Nginx reverse proxy on the frontend server uses it, as does Nginx on the backend servers. All communication between servers is encrypted when you use HTTPS.

What about custom domain names?

11.5% of accounts are using a custom domain name. They have more than a * web address.

Each SSL certificate, which is the “key” for encryption, is dedicated to a domain name. In this case, our SSL certificate is only “trusted” for *

You can try to use your domain name with HTTPS, but your web browser will display a huge security warning. If you say to your web browser “it’s OK, I understand the risk”, then you can use our certificate combined with your domain name.

The obvious solution is to use Let’s Encrypt, recently released. It will let us generate custom certificates, perfectly compliant with web browser requirements. We will work on it.

Kenny Austin and Friends at the Odd Fellow

Published 21 Aug 2016 by Dave Robertson in Dave Robertson.


Basic iPhone security for regular people

Published 18 Aug 2016 by Carlos Fenollosa in Carlos Fenollosa — Blog.

Real life requires a balance between convenience and security. You might not be a high-profile person, but we all have personal information on our phones which can give us a headache if it falls into the wrong hands.

Here are some options you can enable to harden your iPhone in the case of theft, a targeted attack or just a curious nephew who's messing with your phone.

Even if you don't enable them all, it's always nice to know that these features exist to protect your personal information. This guide is specific for iPhones, but I suppose that most of them can be directly applied to other phones.

Password-protect your phone

Your iPhone must always have a password. Otherwise, anybody with physical access to your phone will get access to all your information: calendar, mail, pictures or *gasp* browser history.

Passwords are inconvenient. However, even a simple 4-digit code will stop casual attackers, though it is not secure against a resourceful attacker.

☑ Use a password on your phone: Settings > Touch ID & Passcode

Furthermore, enable the 10-attempt limit, so that people can't brute-force your password.

☑ Erase data after 10 attempts: Settings > Touch ID & Passcode > Erase data (ON)

If your phone has Touch ID, enable it, and use a very long and complicated password to unlock your phone. You will only need to input it on boot and for a few options. It is reasonably secure and has few drawbacks for most users. Unless you have specific reasons not to do it, just go and enable Touch ID.

☑ Enable Touch ID: Settings > Touch ID & Passcode

Regarding password input, and especially if your phone doesn't have Touch ID, using a numeric keyboard is much faster than the QWERTY one. Here's a trick that will help you choose a secure numeric password which is easy to remember.

Think of a word and convert it to numbers as if you were dialing them on a phone, i.e. ABC -> 2, DEF -> 3, ..., WXYZ -> 9. For example, if your password is "PASSWORD", the numeric code would be 72779673.

The iPhone will automatically detect that the password contains only numbers and will present a digital keyboard on the lock screen instead of a QWERTY one, making it super easy to remember and type while still keeping a high level of security.
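The word-to-digits trick above can be sketched in a few lines of Python (purely illustrative, not an iPhone feature):

```python
# Map each phone key to the letters printed on it
KEYPAD = {
    '2': 'ABC', '3': 'DEF', '4': 'GHI', '5': 'JKL',
    '6': 'MNO', '7': 'PQRS', '8': 'TUV', '9': 'WXYZ',
}
# Invert it: letter -> digit
LETTER_TO_DIGIT = {letter: digit
                   for digit, letters in KEYPAD.items()
                   for letter in letters}

def word_to_keypad_code(word):
    """Convert a memorable word to the digits you would dial for it."""
    return ''.join(LETTER_TO_DIGIT[c] for c in word.upper())

print(word_to_keypad_code("PASSWORD"))  # → 72779673
```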

☑ If you must use a numeric password, use a long one: Settings > Touch ID & Passcode

Harden your iPhone when locked

A locked phone can still leak private data. Accessing Siri, the calendar or messages from the lock screen is handy, but depending on your personal case, can give too much information to a thief or attacker.

Siri is a great source of data leaks, and I recommend that you disable it when your phone is locked. It will essentially squeal your personal info, your contacts, tasks or events. A thief can easily know everything about you or harass your family if they get their hands on a phone with Siri enabled on the lock screen.

This setting does not disable Siri completely; it just requires the phone to be unlocked for Siri to work.

☑ Disable Siri when phone is locked: Settings > Touch ID & Passcode > Siri

If you have confidential data on your calendar, you may also want to disable the "today" view which usually includes your calendar, reminders, etc.

☑ Disable Today view: Settings > Touch ID & Passcode > Today

Take a look at the other options there. You may want to turn off the notifications view, or the option to reply with a message. An attacker may spoof your identity by answering messages while the phone is locked, for example, taking advantage of an SMS from "Mom" and tricking her into revealing her maiden name, pet names, etc., which are usually answers to secret questions to recover your password.

☑ Disallow message replies when the phone is locked: Settings > Touch ID & Passcode > Reply with Message

Having your medical information on the emergency screen has pros and cons. Since I don't have any dangerous conditions, I disable it. Your case may be different.

Someone with your phone can use Medical ID to get your name and picture, which may be googled for identity theft or sending you phishing emails. Your name can also be searched for public records or DNS whois information, which may disclose your home phone, address, date of birth, ID number and family members.

In summary, make sure that somebody who finds your locked phone cannot discover who you are or interact as if they were you.

☑ Disable Medical ID: Health > Medical ID > Edit > Show When Locked

Some people think that letting anyone find out the owner of the phone is a good idea, since an honest person who finds your lost phone can easily contact you. However, you can always display a personalized message on your lock screen if you report your phone missing on iCloud.

☑ Enable "Find my phone": Settings > iCloud > Find my iPhone > Find My iPhone

Make sure that your phone will send its location just before it runs out of battery.

☑ Enable "Find my phone": Settings > iCloud > Find my iPhone > Send Last Location

To finish this section, if you don't have the habit of manually locking your phone after you use it, or before placing it in your pocket, configure your iPhone to do it automatically:

☑ Enable phone locking: Settings > General > Auto-Lock

Harden the hardware

Your phone is now secure and won't sing like a canary when it gets into the wrong hands.

However, your SIM card may. SIMs can contain personal information, like names, phones or addresses, so they must be secured, too.

Enable the SIM lock so that, on boot, it will ask for a 4-digit code besides your phone password. It may sound annoying, but it isn't. It's just an extra step that you only need to perform once every many days, when your phone restarts.

Otherwise, a thief can stick the SIM in another phone and access that information and discover your phone number. With it, you may be googled, or they may attempt phishing attacks weeks later.

Beware that this strategy doesn't allow the phone to ping home after it has been shut down and turned on.

☑ Enable SIM PIN: Settings > Phone > SIM PIN

Enable iCloud. When your phone is associated with an iCloud account, it is impossible for another person to use it, dropping its resale value to almost zero. I've had friends get their phones back after a casual thief tried, unsuccessfully, to sell them thanks to the iCloud lock, and finally decided to do the right thing and return them.

☑ Enable iCloud: Settings > iCloud

If you have the means, try to upgrade to an iPhone 5S or higher. These phones contain a hardware element called Secure Enclave which encrypts your personal information in a way that can't even be cracked by the FBI. If your phone gets stolen by a professional, they won't be able to solder the flash memory into another device and recover your data.

☑ Upgrade to a phone with a Secure Enclave (iPhone 5S or higher)

Harden your online accounts

In reality, your online data is much more at risk than your physical phone. Botnets constantly try to find vulnerabilities in services and steal user passwords.

The first thing you must do right now is to install a password manager. Your iPhone has one built into the system, which is good enough to generate unique passwords and auto-fill them when needed.

If you don't like Apple's Keychain, I recommend LastPass and 1Password.

Why do you need a password manager? The main reason is to avoid having a single password for all services. The popular trick of having a weak password for most sites and another strong password for important sites is a dangerous idea.

Your goal is to have a different password for each site/service, so that if it gets attacked or you inadvertently leak it to a phishing attack, it is no big deal and doesn't affect all your accounts.

Just have a different one for each service and let the phone remember all of them. I don't know my Gmail, Facebook, or Twitter passwords; my browser remembers them for me.

☑ Use a password manager: Settings > iCloud > Keychain > iCloud Keychain

There is another system which complements passwords, called "Two-Factor Authentication", or 2FA. You have probably used it in online banking; they send you an SMS with a confirmation code that you have to enter somewhere.

If your password gets stolen, 2FA is a fantastic barrier against an attacker. Without your phone, they can't access your data, even if they have all your passwords.

☑ Use 2FA for your online accounts: manual for different sites

2FA makes it critical to disable SMS previews, because if a thief steals your phone and already has some of your passwords, they can use your locked phone to read 2FA SMS messages.

If you use iMessage heavily, this may be cumbersome, so decide for yourself.

☑ Disable SMS previews on locked phone: Settings > Notifications > Messages > Show Previews

Make it easy to recover your data

If the worst happens, and you lose your phone, get it stolen or drop it on the Venice canals, plan ahead so that the only loss is the money for a new phone. You don't want to lose your pictures, passwords, phone numbers, events...

Fortunately, iPhones have a phenomenal backup system which can store your phone data in the cloud or on your Mac. I have a Mac, but I recommend the iCloud backup nonetheless.

Apple only offers 5 GB of storage in iCloud, which is poor, but fortunately, the pricing tiers are fair. For one or two bucks a month, depending on your usage, you can buy the cheapest and most important digital insurance to keep all your data and pictures safe.

iCloud backup can automatically set up a new phone and make it behave exactly like your old phone.

If you own a Mac, once you pay for iCloud storage, you can enable the "iCloud Photo Library" on Settings > iCloud > Photos > iCloud Photo Library for transparent syncing of all your pictures between your phone and your computer.

☑ Enable iCloud backup: Settings > iCloud > Backup > iCloud Backup

If you don't want the iCloud backup, at least add a free iCloud account or any other "sync" account like Google's, and use it to store your contacts, calendars, notes and Keychain.

☑ Enable iCloud: Settings > iCloud

Bonus: disable your phone when showing pictures

Afraid of handing your phone over to show somebody a picture? People have a tendency to swipe around to see other images, which may be a bad idea in some cases.

To save them from seeing things that can't be unseen, you can use a trick with the Guided Access feature to lock all input to the phone, yet still show whatever is on the screen.

☑ Use Guided Access to lock pictures on screen: Read this manual

This is not a thorough guide

As the title mentions, this is an essential blueprint for iPhone users who are not a serious target for digital theft. High-profile people need to take many more steps to secure their data. Still, they all implement these options too.

The usual scenario for a thief who steals your phone at a bar is as follows: they will turn it off or put it in airplane mode and try to unlock it. Once they see that it's locked with iCloud, they can either try to sell it for parts, return it or discard it.

Muggers don't want your data. However, it doesn't hurt to implement some security measures.

In worse scenarios, there are criminal operations that specialize in buying stolen phones at a very low price and running mass, unsophisticated attacks on unsuspecting users to trick them into unlocking the phone or giving up personal data.

You don't need the same security as Obama or Snowden. Nonetheless, knowing how your phone leaks personal information and the possible attack vectors is important in defending yourself from prying eyes.

You have your whole life on your phone. In the case of an unfortunate theft, make it so the only loss is the cost of a new one.

Tags: security


Faster and More Accessible: The New DigitalOcean.com

Published 16 Aug 2016 by DigitalOcean in DigitalOcean Blog.

It's here! The new DigitalOcean.com launched last week, and we're so excited to share it with you.

We unified the site with our updated branding, but more importantly, we focused on improving the site's accessibility, organization, and performance. This means that you'll now have faster load times, less data burden, and a more consistent experience.

This rebuild is a nod to the values at the core of our company: we want to build fast, reliable products that anyone can use. So how did we make our site twice as fast and WCAG AA compliant? Read on:



One of the biggest concerns we had for our website redesign was making it accessible for users with low vision, people who use screen readers, and users who navigate via keyboard. Our primary focus was to be WCAG 2.0 AA compliant in terms of color contrast and to use accurate, semantic HTML. This alone took care of most of the accessibility concerns we faced.

We also made sure to include text with any descriptive icons and images. Where we couldn't use native HTML or SVG elements, we used ARIA roles and attributes, especially focusing on our forms and interactive elements. The design team did explorations based on the various ways people may perceive color and put our components through a variety of tests to make sure these were also accounted for.

control panel color palette

We keep track of our progress on an internally hosted instance of pa11y, and when we first uploaded the new site to the staging server, seeing the drop in errors and warnings made all of the audits worth it:

pa11y dashboard

A Unified System

The old CSS had thousands of rules, declarations, and unique colors. The un-gzipped file size came out to a whopping 306 kB.

For the redesign, we implemented a new design system called Float based on reusable components and utility classes to simplify and streamline our styles. With the Float framework, which we hope to open source soon, we were able to get the CSS file size down to almost a quarter of its original size: only 80 kB!

We also dramatically reduced the complexity of our CSS and unified our design. We now have:

The framework stores existing values in a map that we reference instead of creating new variables. This is how we reduced the size of our media queries by 89%. We also used utility classes (such as u-mb--large, which translates to "utility, margin-bottom, large") to unify our margin and padding sizes, which reduced the number of unique spacing resets previously sent down to users by 75%.

Not only is the CSS more unified throughout the site, both visually and variably, it is also much more performant as a result, saving users both time and data.

Front-end Performance

The largest pain point for load time on the web in general is easily media assets. According to the HTTP Archive, as of July 15, 2016, the average web page weighs 2409 kB, and images make up about 64% of this, at an average of 1549 kB. On the new site, we kept this in mind and set a more ambitious goal for our assets: less than 1000 kB with a very fast first load time.

We use SVG images for most of our media and icons throughout the site; these are generally much smaller than .jpg or .png files because SVGs are instructions for painting an image rather than full raster images. This also means the images can scale and shrink with no loss of quality across various devices.

We've also built an icon sprite system using <symbol> and <use> to access these icons. This way, they can be shared in a single resource download for the user throughout the site. Like our scripts, we minify these sprites to eliminate additional white space, as well as minify all of our media assets automatically through our gulp-based build process.

There was one asset, however, that rang in at 600 kB on the old site: the animated gif on the homepage. Gifs are a huge file format, but can be very convenient. To shrink this asset as much as possible, we manually edited it in Photoshop, reducing the color range to only the necessary colors and trimming the frame count by hand. This saved 200 kB beyond the automatic optimization, without reducing the gif's physical size, and got the site down to our goal of less than 1000 kB.

site comparison summary


There is always more work to be done in terms of improved performance and better accessibility, but we're proud of the improvements we've made so far and we'd love to hear what you think of the new site!

Delete non-existent pages from MediaWiki that are listed in Special:AllPages

Published 12 Aug 2016 by DeathCamel57 in Newest questions tagged mediawiki - Webmasters Stack Exchange.

I'm trying to delete two pages:

These two pages are in the Special:AllPages list even though they return 404 not found status.

How can I delete them?

PHP does not handle "bigger" http requests correctly

Published 8 Aug 2016 by user6681109 in Newest questions tagged mediawiki - Server Fault.

After an OS update, "bigger" HTTP requests are no longer handled correctly by the web server/PHP/MediaWiki. Wiki article content is truncated after about 6K characters and MediaWiki reports a loss of session.

Symptoms: I first recognized the error with my formerly working installation of MediaWiki (PHP). When I edit an article and its size grows beyond approx. 6k characters, the article text is truncated and MediaWiki refuses to save the new text, reporting a lost-session error instead. Smaller articles are not affected.

Question: Is this possibly a bug in PHP? Should I file a bug report? Or am I doing something wrong? Is something misconfigured?

Context: At home, I recently updated my raspbian LAMP server from wheezy to jessie. It all worked well before.

  1. Operating system: Raspbian jessie (formerly wheezy) on a Raspberry Pi.
  2. Apache 2.4.
  3. phpinfo() shows no indication of suhosin, which is sometimes reported to cause problems with larger HTTP requests. Also, other PHP parameters that are sometimes mentioned as relevant on the web look unremarkable: PHP Version 5.6.24-0+deb8u1, max_input_time=60, max_execution_time=30, post_max_size=8M

What I tried so far:

  1. Other PHP program: To investigate further, I uploaded files through a simple PHP file upload script. Similar problem; file upload does not work. (For your reference, the code of the upload script was taken from here: The script uses simple form data, no Ajax, no JSON, ...)
  2. Larger file causes split: Moreover, larger http file upload requests (using files of several hundred KB) are seemingly split into two requests. The apache access log file shows (remember this is actually only a single request from the browser):
    • ... - - [05/Aug/2016:10:52:38 +0200] "POST /simpleupload.php HTTP/1.1" 200 85689 "https://.../simpleupload.php" "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:38.0) Gecko/20100101 Firefox/38.0"
    • ... - - [05/Aug/2016:10:52:38 +0200] "\xb4W\xcd\xff" 400 557 "-" "-" -
  3. Other browsers: The behavior can be replicated with different browsers: Firefox on Linux, Firefox 38 on Windows, and the elinks browser on the same machine.
  4. Eliminate network problems: I used elinks to access the webserver on localhost. Same problems in MediaWiki and the PHP file upload script.
  5. Increased Log level: Increasing the Apache LogLevel to debug does not bring up any new information during request handling.
  6. Error does not occur with Perl: The problem does not occur with a different file upload script written in Perl. File upload works properly. So, it does not seem to be a problem with OS, Apache, Browser, ...

Remarks: This is my attempt to rephrase my locked/on-hold question, which I cannot edit anymore.

Can preloaded text for edit pages use templates that change depending on page creator's wishes?

Published 7 Aug 2016 by user294584 in Newest questions tagged mediawiki - Webmasters Stack Exchange.

I'm making a collaborative fiction writing site with MediaWiki that will host stories by different authors. Some stories will allow any kind of editing, others just minor changes, others just typo fixes, others no changes at all except after discussion, etc.

I found the way to change MediaWiki:copyrightwarning2 and put a generalized message there, but I'd really like a way for authors to customize a page that gets pulled in, perhaps by a template, into preload text that appears at the top of the edit page.

If it's just for all pages they've authored, that would be fine, but ideally it could be on a per-story basis.

Is there a way to implement such a thing?

Plugin repository pimped up

Published 4 Aug 2016 by Roundcube Webmail Dev Team in Roundcube Webmail Project News.

The Roundcube Plugin Repository, which currently hosts over 100 plugins for Roundcube Webmail, has recently been freshened up with new versions of Composer, Packagist and Solr and is now running on PHP 7 on a brand new server kindly sponsored by XS4All. The new site also has Grade A SSL support to guarantee authenticity.

With this upgrade, we also moved the platform out of its beta state, as it has proven to be a great extension to the Roundcube microcosmos. In case you developed a plugin and it’s not yet listed in our repository, please read how to submit plugins today.

We’d like to thank all the great free software projects, from Nginx and PHP to Composer and Packagist, as well as successful IT companies caring about open source like XS4All and GlobalSign, for making this all happen.

Does snappy packaging make a convenient, portable, offline MediaWiki possible (say, on a thumb drive)?

Published 28 Jul 2016 by wattahay in Newest questions tagged mediawiki - Ask Ubuntu.

I LOVE MediaWiki, and would love to have one as a personal wiki solution on a thumb drive. There seem to be solutions for this on Windows, via XAMPP. But from what I can tell, Linux does not allow this.

Now that snaps are here, I am wondering if they make such a technology more accessible.

How does one go about creating a portable, offline MediaWiki on -- say -- a thumb drive? (I apologize, because I realize this forum could be the wrong place if this has nothing to do with snaps.)

Thank you for any direction on this ahead of time.

Update 1.2.1 released

Published 25 Jul 2016 by Roundcube Webmail Dev Team in Roundcube Webmail Project News.

We just published the first service release to update the stable version 1.2. It contains some important bug fixes and improvements in the recently introduced Enigma plugin for PGP encryption. See the detailed changelog here.

This release is considered stable and we recommend updating all production installations of Roundcube to this version. Download it from

Please do backup your data before updating!

Living in a disrupted economy

Published 21 Jul 2016 by Carlos Fenollosa in Carlos Fenollosa — Blog.

There is this continuing discussion on whether technology destroys more jobs than it creates. Every few years, yet another tech revolution occurs, journalists publish articles, pundits share their opinions, politicians try to catch up, and those affected always voice their concerns. These last couple of years have been no exception, thanks to Uber, Airbnb, and the so-called sharing economy.

I'm a technologist and a relatively young person, so I am naturally biased towards technological disruption. After all, it is people like me who are trying to make a living by taking over older jobs.

I suggest that you take a few minutes to read a fantastic article titled The $3500 shirt. That essay reveals how horrible some industries were before they could be automated or replaced by something better. Go on, please read it now, it will only take three minutes.

Now, imagine you had to spend a couple of weeks of your time to make a t-shirt from scratch. Would that be acceptable? I guess we all more or less agree that the textile revolution was a net gain for society. Nevertheless, when it occurred, some Luddites probably complained, arguing that the loom put seamstresses out of work.

History is packed with dead industries. We killed the ice business with the modern fridge. We burn less coal for energy, so miners go unemployed. And let's not forget the basis of modern civilization, the agricultural revolution, which is the only reason we humans can feed ourselves. Without greenhouses, nitrates, tractors, pest protection and advancements in farming, humanity would starve.

Admittedly, it shrank the primary sector from 65% of the workforce to the current 10%. Isn't it great that most of us don't need to wake up before sunrise to water our crops? In hindsight, can you imagine proclaiming that the 1800s way of farming is better because it preserves farming jobs?

The bottom line is that all economic transformations are a net gain for society. They may not be flawless, but they have allowed us humans to live a better life.

So why do some characters fight against current industry disruptions if history will prove them wrong?


As a European and a social democrat, I believe that States must regulate some economies to avoid monopolies and abuses, supporting the greater good. Furthermore, I sympathize with the affected workforce, both personally and on a macroeconomic level. All taxi drivers suddenly going jobless because of Uber is detrimental to society.

However, it pains me to see that European politicians are taking the opposite stance, brandishing law and tradition as excuses to hinder progress.

Laws must serve people, not the other way around. If we analyze the taxi example, we learn that there is a regulation which requires taxi drivers to pay a huge sum of money up front to operate. Therefore, letting anybody get in that business for free is unfair and breaks the rules of the game. Unsurprisingly, this situation is unfair not because of the new players, but because that regulation is obsolete.

It isn't ethically right that somebody who spent a lot of money to get a license sees their job at risk. But the solution isn't to block other players, especially when it's regulation which is at fault. Let's sit down, think how to establish a transition period, and maybe even reimburse drivers part of that money with the earnings from increased taxes due to a higher employment and economic activity.

There is a middle-ground solution: don't change the rules drastically, but don't use them as an excuse to impede progress.

At the end of the day, some careers are condemned to extinction. That is a real social drama. However, what should we do? Artificially stop innovation to save inefficient jobs which, when automated or improved, make the world better for everyone?


We millennials have learned that the concept of a single, lifetime profession just does not exist anymore. Previous generations do not want to accept that reality. I understand that reconverting an older person to a new career may be difficult, but if the alternative is letting that person obstruct younger people's opportunities, that's not fair.

Most professions decline organically, by the very nature of society and economy. It is the politicians' responsibility to mediate when this process is accelerated by a new industry or technology. New or automated trades will take their place, usually providing a bigger collective benefit, like healthcare, education, or modern farming.

Our duty as a society is to make sure everyone lives a happy and comfortable life. Artificially blocking new technologies and economic models harms everyone. If it were for some Luddites, we'd be still paying $3500 for a shirt, and that seamstress would never have been a nurse or a scientist.

Tags: law, startups


Roots and Flowers of Quaker Nontheism (Abridged)

Published 19 Jul 2016 by Os Cresson in

This abridged version of “Roots and Flowers of Quaker Nontheism” was compiled for the convenience of students of Quaker nontheism. Ellipses ( . . . ) or brackets ([ ]) indicate where material has been omitted. The original is a chapter in Quaker and Naturalist Too (Morning Walk Press, Iowa City, IA, 2014), available from The chapter includes text (pp. 65-103), bibliography (pp. 147-157), source notes (pp. 165-172), and references to 20 quotations that appear elsewhere in the book but are not in this abridged version.

Part I: Roots of Quaker Nontheism

This is a study of the roots of Quaker nontheism today. Nontheist Friends are powerfully drawn to Quaker practices but they do not accompany this with a faith in God. Nontheism is an umbrella term covering atheists, agnostics, secular humanists, pantheists, wiccaists, and others. You can combine nontheist with other terms and call yourself an agnostic nontheist or atheist nontheist, and so on. Some nontheists have set aside one version of God (e.g. as a person) and not another (e.g. as a word for good or your highest values). A negative term like nontheism is convenient because we describe our views so many different ways when speaking positively.

Many of the Quakers mentioned here were not nontheists but are included because they held views, often heretical in their time, that helped Friends become more inclusive. In the early days this included questioning the divinity of Christ, the divine inspiration of the Bible, and the concepts of heaven, hell, and immortality. Later Friends questioned miracles, the trinity, and divine creation. Recently the issue has been whether Quakers have to be Christians, or theists. All this time there were other changes happening in speech, clothing, marriage practices, and so on. Quakerism has always been in progress.

Views held today are no more authentic because they were present in some form in earlier years. However, it is encouraging to Quaker nontheists today to find their views and their struggle prefigured among Friends of an earlier day.

In the following excerpts we learn about Quaker skeptics of the past and the issues they stood for. These are the roots that support the flowers of contemporary Quaker nontheism. . . .

 First Generation Quaker Skeptics

Quakers were a varied group at the beginning. There was little effective doctrinal control and individuals were encouraged to think for themselves within the contexts of their local meetings. Many of the early traditions are key for nontheists today, such as the emphasis on actions other than talk and the injunction to interpret what we read, even Scripture. All the early Friends can be considered forerunners of the Quaker nontheists of today, but two people deserve special mention. Gerard Winstanley (1609–c.1660) was a Digger, or True Leveller, who became a Quaker. . . . He published twenty pamphlets between 1648 and 1652 and was a political and religious revolutionary. He equated God with the law of the universe known by observation and reason guided by conscience and love. Winstanley wrote,

“I’ll appeal to your self in this question, what other knowledge have you of God but what you have within the circle of the creation? . . . For if the creation in all its dimensions be the fullness of him that fills all with himself, and if you yourself be part of this creation, where can you find God but in that line or station wherein you stand.” [Source Note #1]

Winstanley also wrote,

“[T]he Spirit Reason, which I call God…is that spirituall power, that guids all mens reasoning in right order, and to a right end: for the Spirit Reason, doth not preserve one creature and destroy another . . . but it hath a regard to the whole creation; and knits every creature together into a onenesse; making every creature to be an upholder of his fellow.” [#2]

His emphasis was on the world around and within us: “O ye hear-say  Preachers, deceive not the people any longer, by telling them that this glory shal not be known and seen, til the body is laid in the dust. I tel you, this great mystery is begun to appear, and it must be seen by the material eyes of the flesh: And those five senses that is in man, shall partake of this glory.” [#3]

Jacob Bauthumley (1613–1692) was a shoemaker who served in the Parliamentary Army. . . . His name was probably pronounced Bottomley since this is how Fox spelled it. In 1650 he published The Light and Dark Sides of God, the only pamphlet of his that we have. This was declared blasphemous and he was thrown out of the army, his sword broken over his head, and his tongue bored. After the Restoration he became a Quaker and a librarian and was elected sergeant–at–mace in Leicester. For Bauthumley, God dwells in men and in all the rest of creation and nowhere else. We are God even when we sin. Jesus was no more divine than any person is, and the Bible is not the word of God. He wrote,

“I see that all the Beings in the World are but that one Being, and so he may well be said, to be every where as he is, and so I cannot exclude him from Man or Beast, or any other Creature: Every Creature and thing having that Being living in it, and there is no difference betwixt Man and Beast; but as Man carries a more lively Image of the divine Being then [than] any other Creature: For I see the Power, Wisdom, and Glory of God in one, as well as another onely in that Creature called Man, God appears more gloriously in then the rest. . . . And God loves the Being of all Creatures, yea, all men are alike to him, and have received lively impressions of the divine nature, though they be not so gloriously and purely manifested in some as in others, some live in the light side of God, and some in the dark side; But in respect of God, light and darkness are all one to him; for there is nothing contrary to God, but onely to our apprehension. . . . It is not so safe to go to the Bible to see what others have spoken and writ of the mind of God as to see what God speaks within me and to follow the doctrine and leadings of it in me.” [#4]

Eighteenth Century Quaker Skeptics

There were skeptical Quakers who asserted views such as that God created but does not run the universe, that Jesus was a man and not divine, that much of theology is superstition and divides people unnecessarily, and that the soul is mortal.

An example is John Bartram (1699–1777) of Philadelphia. . . . He was a farmer and perhaps the best known botanist in the American colonies. Bartram had a mystical feeling for the presence of God in nature and he supported the rational study of nature. In 1758 he was disowned by Darby Meeting for saying Jesus was not divine, but he continued to worship at that meeting and was buried there.

In 1761 he carved a quote from Alexander Pope over the door of his greenhouse: “Slave to no sect, who takes no private road, but looks through Nature up to Nature’s God.” In 1743 he wrote, “When we are upon the topic of astrology, magic and mystic divinity, I am apt to be a little troublesome, by inquiring into the foundation and reasonableness of these notions.” In a letter to Benjamin Rush he wrote, “I hope a more diligent search will lead you into the knowledge of more certain truths than all the pretended revelations of our mystery mongers and their inspirations.” [#5] . . .

Free Quakers

These Friends were disowned for abandoning the peace testimony during the Revolutionary War. The Free Quakers cast the issue in more general terms. They supported freedom of conscience and saw themselves as upholding the original Friends traditions. They wrote:

“We have no new doctrine to teach, nor any design of promoting schisms in religion. We wish only to be freed from every species of ecclesiastical tyranny, and mean to pay a due regard to the principles of our forefathers . . . and hope, thereby, to preserve decency and to secure equal liberty to all. We have no designs to form creeds or confessions of faith, but [hope] to leave every man to think and judge for himself…and to answer for his faith and opinions to . . . the sole Judge and sovereign Lord of conscience.” [#6]

Their discipline forbade all forms of disownment: “Neither shall a member be deprived of his right among us, on account of his differing in sentiment from any or all of his brethren.” [#7]

There were several Free Quaker meetings, the longest lasting being the one in Philadelphia from 1781 to 1834.


. . . Hannah Barnard (1754–1825) of New York questioned the interpretation of events in the Bible and put reason above orthodoxy and ethics over theology. She wrote a manual in the form of a dialogue to teach domestic science to rural women. It included philosophy, civics, and autobiography. Barnard supported the French Revolution and insisted that masters and servants sit together during her visits. In 1802 she was silenced as a minister and disowned by Friends. She wrote,

“[N]othing is revealed truth to me, as doctrine, until it is sealed as such on the mind, through the illumination of that uncreated word of God, or divine light, and intelligence, to which the Scriptures, as well as the writings of many other enlightened authors, of different ages, bear plentiful testimony. . . . I therefore do not attach the idea or title of divine infallibility to any society as such, or to any book, or books, in the world; but to the great source of eternal truth only.” [#8]

Barnard also wrote, “under the present state of the Society I can with humble reverent thankfulness rejoice in the consideration that I was made the Instrument of bringing their Darkness to light.” [#9] On hearing Elias Hicks in 1819, she is said to have commented that these were the ideas for which she had been disowned. He visited her in 1824, a year before she died.

[Also mentioned in the original version of this essay are Job Scott (1751–1793), Abraham Shackleton (1752–1818), Mary Newhall (c.1780–1829) and Mary Rotch.]


The schism that started in 1827 involved many people but it is instructive to focus on one man at the center of the conflict. Elias Hicks (1748–1830) traveled widely, urging Friends to follow a God known inwardly and to resist the domination of others in the Society. He wrote,

“There is scarcely anything so baneful to the present and future happiness and welfare of mankind, as a submission to traditional and popular opinion, I have therefore been led to see the necessity of investigating for myself all customs and doctrines . . . either verbally or historically communicated . . . and not to sit down satisfied with any thing but the plain, clear, demonstrative testimony of the spirit and word of life and light in my heart and conscience.” [#10]

Hicks emphasized the inward action of the Spirit rather than human effort or learning, but he saw a place for reason. He turned to “the light in our own consciences, . . . the reason of things, . . . the precepts and example of our Lord Jesus Christ, (and) the golden rule.” [#11]

[Also mentioned: Benjamin Ferris (1780–1867).]

Manchester Free Friends

David Duncan (c.1825–1871), a former Presbyterian who had trained for the ministry, was a merchant and manufacturer in Manchester, England. He married Sarah Ann Cooke Duncan and became a Friend in 1852. He was a republican, a social radical, a Free Thinker, and an aggressive writer and debater. Duncan began to doubt Quaker views about God and the Bible and associated the Light Within with intellectual freedom. He developed a following at the Friends Institute in Manchester and the publication of his Essays and Reviews in 1861 brought the attention of the Elders. In it he wrote, “If the principle were more generally admitted that Christianity is a life rather than a formula, theology would give place to religion . . . and that peculiarly bitter spirit which actuates religionists would no longer be associated with the profession of religion.” [#12] In 1871 he was disowned and then died suddenly of smallpox. Sarah Ann Duncan and about 14 others resigned from their meeting and started what came to be called the Free Friends.

In 1873, this group approved a statement which included the following:

“It is now more than two years and a quarter since we sought, outside of the Society of Friends, for the liberty to speak the thoughts and convictions we entertained which was denied to us within its borders, and for the enjoyment of the privilege of companionship in “unity of spirit,” without the limitations imposed upon it by forced identity of opinion on the obscure propositions of theologians. We were told that such unity could not be practically obtained along with diversity of sentiment upon fundamental questions, but we did not see that this need necessarily be true where a principle of cohesion was assented to which involved tolerance to all opinions; and we therefore determined ourselves to try the experiment, and so remove the question, if possible, out of the region of speculation into that of practice. We conceived one idea in common, with great diversity of opinion amongst us, upon all the questions which divide men in their opinions of the government and constitution of the universe. We felt that whatever was true was better for us than that which was not, and that we attained it best by listening and thinking for ourselves.” [#13]

Joseph B. Forster (1831–1883) was a leader of the dissidents after the death of David Duncan. (For another excerpt, see p. 17.) He wrote, “[E]very law which fixes a limit to free thought, exists in violation of the very first of all doctrines held by the Early Quakers,—the doctrine of the ‘Inner Light’.” [#14]

Forster was editor of a journal published by the Free Friends. In the first issue he wrote,

“We ask for [The Manchester Friend] the support of those who, with widely divergent opinions, are united in the belief that dogma is not religion, and that truth can only be made possible to us where perfect liberty of thought is conceded. We ask for it also the support of those, who, recognizing this, feel that Christianity is a life and not a creed; and that obedience to our knowledge of what is pure and good is the end of all religion. We may fall below our ideal, but we shall try not to do so; and we trust our readers will, as far as they can, aid us in our task.” [#15]

[Also mentioned: George S. Brady (1833–1913).]

Progressive and Congregational Friends

The Progressive Friends at Longwood (near Philadelphia) were committed to peace and the rights of women and blacks, and were also concerned about church governance and doctrine. . . . Between 1844 and 1874 they separated from other Hicksite Quakers and formed a monthly meeting and a yearly meeting. They asked, “What right had one Friend, or one group of Friends, to judge the leadings of others?” [#16] They objected to partitions between men’s and women’s meetings and to the authority of meeting elders and ministers over the expression of individual conscience and other actions of the members. There were similar separations in Indiana Yearly Meeting (Orthodox) in the 1840s, in Green Plain Quarterly Meeting in Ohio in 1843, in Genesee Yearly Meeting (Hicksite) in northern New York and Michigan, and in New York Yearly Meeting in 1846 and 1848.

A Congregational Friend in New York declared,

“We do not require that persons shall believe that the Bible is an inspired book; we do not even demand that they shall have an unwavering faith in their own immortality; nor do we require them to assert a belief in the existence of God. We do not catechize men at all as to their theological opinions. Our only test is one which applies to the heart, not to the head. To all who seek truth we extend the hand of fellowship, without distinction of sex, creed and color. We open our doors, to all who wish to unite with us in promoting peace and good will among men. We ask all who are striving to elevate humanity to come here and stand with us on equal terms.” [#17]

In their Basis of Religious Association Progressive Friends at Longwood welcomed “all who acknowledge the duty of defining and illustrating their faith in God, not by assent to a creed, but lives of personal purity, and works of beneficence and charity to mankind.” They also wrote,

“We seek not to diminish, but to intensify in ourselves the sense of individual responsibility. . . . We have set forth no forms or ceremonies; nor have we sought to impose upon ourselves or others a system of doctrinal belief. Such matters we have left where Jesus left them, with the conscience and common sense of the individual. It has been our cherished purpose to restore the union between religion and life, and to place works of goodness and mercy far above theological speculations and scholastic subtleties of doctrine. Creed–making is not among the objects of our association. Christianity, as it presents itself to our minds, is too deep, too broad, and too high to be brought within the cold propositions of the theologian. We should as soon think of bottling up the sunshine for the use of posterity, as of attempting to adjust the free and universal principles taught and exemplified by Jesus of Nazareth to the angles of a manmade creed.” [#18]

Between 1863 and 1874 many of the Friends at Longwood were taken back into membership by their meetings. By the time of the birth of modern liberal Quakerism at the turn of the century, many Friends in unprogrammed meetings had become progressives.

Quaker Free Thinkers

Liberal religious dissenters in the nineteenth century were called Free Thinkers. Lucretia Mott (1793–1880) worked for abolition of slavery, women’s suffrage, and temperance. . . . Her motto was “Truth for authority, and not authority for truth.” She refused to be controlled by her meeting but also refused to leave it. Her meeting denied her permission to travel in the ministry after 1843, but she went anyway. Mott was a founding member of the Free Religious Association in 1867, when she told them, “I believe that such proving all things, such trying all things, and holding fast only to that which is good, is the great religious duty of our age. . . . Our own conscience and the Divine Spirit’s teaching are always harmonious and this Divine illumination is as freely given to man as his reason, or as are many of his natural powers.” She also said, “I confess to great skepticism as to any account or story, which conflicts with the unvarying natural laws of God in his creation.” [#19] . . . In 1849 Mott said,

“I confess to you, my friends, that I am a worshipper after the way called heresy—a believer after the manner many deem infidel. While at the same time my faith is firm in the blessed, the eternal doctrine preached by Jesus and by every child of God since the creation of the world, especially the great truth that God is the teacher of his people himself; the doctrine that Jesus most emphatically taught, that the kingdom is with man, that there is his sacred and divine temple.” [#20]

On another occasion she said, “Men are too superstitious, too prone to believe what is presented to them by their church and creed; they ought to follow Jesus more in his non–conformity. . . . I hold that skepticism is a religious duty; men should question their theology and doubt more in order that they might believe more.” [#21]

Elizabeth Cady Stanton wrote in her diary that Mott said to her,

“There is a broad distinction between religion and theology. The one is a natural, human experience common to all well–organized minds. The other is a system of speculations about the unseen and the unknowable, which the human mind has no power to grasp or explain, and these speculations vary with every sect, age, and type of civilization. No one knows any more of what lies beyond our sphere of action than thou and I, and we know nothing.” [#22] . . .

Another Free Thinker was Susan B. Anthony (1820–1906). She was an active supporter of rights for women, abolition of slavery, and temperance. Raised a Quaker, she joined the Unitarians after her meeting failed to support abolition, but she continued to consider herself a Quaker. Her friend, Elizabeth Cady Stanton, called her an agnostic. She refused to express her opinion on religious subjects, saying she could only work on one reform at a time. In 1890 she told a women’s organization, “These are the principles I want to maintain—that our platform may be kept as broad as the universe, that upon it may stand the representatives of all creeds and of no creeds—Jew and Christian, Protestant and Catholic, Gentile and Mormon, believer and atheist.” In a speech in 1896 she said, “I distrust those people who know so well what God wants them to do, because I notice it always coincides with their own desires. . . . What you should say to outsiders is that a Christian has neither more nor less rights in our association than an atheist. When our platform becomes too narrow for people of all creeds and of no creeds, I myself can not stand upon it.” When asked in an interview in 1896 “Do you pray?”, she answered, “I pray every single second of my life; not on my knees, but with my work. My prayer is to lift women to equality with men. Work and worship are one with me. I know there is no God of the universe made happy by my getting down on my knees and calling him ‘great’.” In 1897 she wrote, “(I)t does not matter whether it is Calvinism, Unitarianism, Spiritualism, Christian Science, or Theosophy, they are all speculations. So I think you and I had better hang on to this mundane sphere and keep tugging away to make conditions better for the next generation of women.” Anthony said to a group of Quakers in 1885, “I don’t know what religion is. I only know what work is, and that is all I can speak on, this side of Jordan.” [#23]

Elizabeth Cady Stanton (1815–1902) was a leader of the women’s suffrage movement for fifty-five years and one of the most famous and outspoken Free Thinkers of her day. She was a member of Junius Monthly Meeting, a Congregational meeting in upstate New York, during their first ten years after splitting off from Genesee Yearly Meeting in 1848. As a child she was terrified by preaching about human depravity and sinners’ damnation. Later she wrote, “My religious superstitions gave place to rational ideas based on scientific facts, and in proportion, as I looked at everything from a new standpoint, I grew more happy day by day.” [#24] She also wrote,

“I can say that the happiest period of my life has been since I emerged from the shadows and superstitions of the old theologies, relieved from all gloomy apprehensions of the future, satisfied that as my labors and capacities were limited to this sphere of action, I was responsible for nothing beyond my horizon, as I could neither understand nor change the condition of the unknown world. Giving ourselves, then, no trouble about the future, let us make the most of the present, and fill up our lives with earnest work here.” [#25]

[Also mentioned: Maria Mitchell (1818–1889).]

Modern Liberal Friends

. . . Joseph Rowntree (1836–1925) was a chocolate manufacturer and reformer of the Religious Society of Friends and of society in general. He helped craft the London Yearly Meeting response to the Richmond Declaration of 1887, when he wrote, “(T)he general welfare of the Society of Friends the world over will not be advanced by one Yearly Meeting following exactly in the footsteps of another, but by each being faithful to its own convictions and experience. This may not result in a rigid uniformity of either thought or action, but it is likely to lead to something far better—to a true and living unity.” [#26]

The conference of Friends in Manchester in 1895 was a clear declaration of their views, as were the first Summer School (on the British model) at Haverford College in 1900, the founding of Friends General Conference in 1900, and the founding of the American Friends Service Committee in 1917.

William Littleboy (c.1852–1936) and his wife Margaret Littleboy were among the first staff at Woodbrooke Quaker Study Centre. William Littleboy was an advocate of ethical living as a basis for religion, and of opening the Religious Society of Friends to skeptics. In 1902 he wrote to Rufus Jones urging that consideration be given to Quakers who do not have mystical experiences, and in 1916 he published a pamphlet, The Appeal of Quakerism to the NonMystic. In it he wrote,

“We know that to some choice souls God’s messages come in ways which are super–normal, and it is natural that we should look with longing eyes on these; yet such cases are the exception, not the rule. . . . Let us then take ourselves at our best. [Non–mystics] are capable of thought and care for others. We do at times abase ourselves that others may be exalted. On occasion we succeed in loving our enemies and doing good to those who despitefully use us. For those who are nearest to us we would suffer—perhaps even give our life, because we love them so. . . . To the great non–mystic majority [the Quaker’s] appeal should come with special power, for he can speak to them, as none other can whose gospel is less universal.” [#27]

This influenced the young Henry Cadbury who many years later said, “I am sure that over the years [William Littleboy’s] perceptive presentation of the matter has brought real relief to many of us.” [#28]

[Also mentioned: Arthur Stanley Eddington (1882–1934), Joel Bean (1825–1914) and Hannah Shipley Bean (1830–1909).]


Some Friends worked their entire lives to bring together dissident branches of the Religious Society of Friends. Examples are Henry Cadbury and Rufus Jones. They based their call for reunification on the same grounds that nontheist Friends rely on today. These included an emphasis on practice rather than beliefs; the idea that Quakers need not hold the same beliefs; describing Quaker beliefs in the meeting discipline by quoting from the writings of individuals; the idea that religiously inspired action can be associated with many different faiths; the love of diversity within the Religious Society of Friends; the view that religion is a matter of our daily lives; and the emphasis on Jesus as a person rather than doctrine about Jesus.

These bases for reunification among Friends also serve to include nonmystics, nonChristians, and people of other faiths including nontheist faiths.

NonChristian Friends

At regular intervals during the history of Friends there is discussion about whether we have to be Christian to be Quaker. This is often in the form of an exchange of letters in a Quaker journal. One such flurry was prompted by two letters from Watchman in The Friend in 1943 and 1944 (reprinted in 1994).

In 1953 Arthur Morgan proposed inviting people of other faiths to join Friends. In 1966 Henry Cadbury was invited to address the question in a talk given at the annual sessions of Pacific Yearly Meeting. In his view Quakerism and Christianity represent sets of beliefs from which individuals make selections, with no one belief required of all. Quaker universalists have raised the issue many times (for example, John Linton in 1979 and Daniel A. Seeger in 1984). [#29]

Universalist Friends

The Quaker Universalist Group was formed in Britain in 1979, and the Quaker Universalist Fellowship in the United States in 1983. Among the founders were nontheists John Linton and Kingdon W. Swayne. It is a diverse movement. For the early Friends universalism meant that any person could be saved by Christ. Today, for some Friends universalism is about accepting diversity of religious faith. For others it is an active searching for common aspects of different faiths. Universalism can also mean an effort to learn from each other and live together well and love each other, differences and all.


Over the years, many Quakers stood against the doctrinal views of their times. They represent a continual stream of dissent and a struggle for inclusiveness that started with the birth of our Society. What was rejected at one point was accepted later. Much of what Friends believe today would have been heresy in the past.

Through the years, certain traditions in the Religious Society of Friends have supported the presence of doctrinal skeptics. These include being noncreedal, tolerant, and universalist; a concern for experience rather than beliefs; the authority of the individual, as well as the community, in interpreting what we read; and the conviction that Quaker practice and Quaker membership do not require agreement on religious doctrine.

Many Quaker practices are typically explained in terms of God, Spirit or the Inner Light, such as worship, leadings, discernment, the sense of the meeting, and continual revelation. Nontheist Friends embrace the practices without the explanation.

Part II: Flowers Of Quaker Nontheism

This is a look at Quaker nontheism flowering today. Nontheist Friends, by and large, do not experience, accept or believe in a deity. As a negatively defined term, nontheism provides a broad tent for people who hold many different positive views.

In general, nontheists support diversity of thought in the Religious Society of Friends. They bless what theists and nontheists bring to their meetings and the opportunities that come with diversity. They have been cautious about forming their own organizations because they want to join rather than separate from theist Friends. They hope we will accept each other as Quakers, without adjectives.

The material gathered here represents the flowering of Quaker nontheism.

Proto–Nontheist Friends

These Friends were humanists who showed a tender concern for religious skeptics, but they did not publicly address the issue of nontheism. We do not know what their personal views were (or are), and it doesn’t matter. It is enough that they helped create the Religious Society of Friends of today that includes meetings that welcome nontheists.

Jesse Herman Holmes (1863–1942) was a passionate advocate for Quakerism free of creeds. . . . In 1928 in “To the Scientifically–Minded” he wrote, “[Friends] have no common creed, and such unity—of which there is a great deal—as there is among them is merely due to the fact that impartial minds, working on the same conditions, arrive at similar conclusions. However, we demand no unity of opinion, but find interest and stimulus in our many differences.” [#30]

Holmes did not see religion as establishing truth. He wrote in 1912: “The accurate formulating of our ends and of the tested ways of attaining them is the function of philosophy and the sciences. The more difficult task of holding ourselves to the higher loyalties is that of religion. Not the discovery of truth but the patient using of it for the more abundant life is its task.” He saw that Friends can provide a congenial home for scientists, and in fact we need them. [#31]

In private Jesse Holmes could be outspoken. In a manuscript that was not published until 61 years after his death, he wrote:

“Meaningless phrases and irrational theologies have been moulded into rigid, authoritative institutions perverting and stultifying the adventurous, creative spirit which distinguishes us from the rest of the animal kingdom. They turn our attention from the splendid possibilities of our mysterious life and toward a mythical, improbable life after death. Over all presides a despotic, unjust, and irrational deity of the medieval king type, who must be worshipped by flattery and blind obedience. . . . I propose to a fairly intelligent people of a partially scientific age…that all this is a sad mess of ancient and medieval superstition which should speedily be relegated to the storage rooms of the museum of history. We should stop the pretense of awe, or even respect, for teachings which lack even a slight amount of evidence or probability. We should substitute a religion based on actual repeatable, describable and testable experience, and which has some connection with the genuine values of life: not an absurd and impossible life in a stupid, idle heaven, but a rich, active, adventurous life in the world we live in. . . . [I]f those who reject all this medieval rubbish will join heartily in a real world–wide effort for an uplifted humanity; if they refuse to continue systems which involve contests in indiscriminate killing and destruction; if they will dedicate themselves to a general cooperation in mutual service, refusing all incitements to seek power over each other; if they will accept the adventure of lives everywhere seeking harmony, good–will, understanding, friendliness; if they will turn aside from all claims of super–men for super–rights and privileges, whether in religion, in politics, in industry or in society; then indeed we may renew and revive the purposes of prophets, statesmen, scholars, scientists, and good people since the world began. This would be a real religion.” [#32]

Henry Joel Cadbury (1883–1974) was an outspoken advocate for a variety of Quakerism without mysticism, unity based on love rather than dogma, beliefs as collateral effects rather than sources of action, ethical living as religion, and the possibility of life as spontaneous response to passing situations. . . . He worked his entire life for unity among Friends. He was an historian of the Religious Society of Friends, a Biblical scholar, a social activist, and a humorist. Cadbury hid his personal beliefs, preferring to help others find their way. He did lift the veil once when he wrote a manuscript that he apparently read to his divinity students in 1936. (It was not published until 2000.) He stated,

“I can describe myself as no ardent theist or atheist. . . . My own religion is mainly neither emotional nor rational but expresses itself habitually or occasionally in action. . . . If you know John Woolman’s Journal you will know what I mean by a religious personality in action. . . . The amazing revelation which he gives is that of a sensitive conscience feeling its course in a series of soul–searching problems — public problems that he felt must be personally decided. Such forms of religion do not often get recorded, but they are none the less real and important. . . . And what is the real test or evidence of religion that I can offer in myself? . . . It is whether in all our contacts . . . you can conclude that not consciously nor for display I represent the manner of reaction that befits a religious personality in action.” [#33]

In the Swarthmore Lecture in 1957, recalling how helpful it had been to read William Littleboy’s The Appeal of Quakerism to the NonMystic, Cadbury said,

“Someone ought to write a pamphlet The Appeal of Quakerism to the NonTheological to help them with their inferiority complex. . . . They seem to others and perhaps to themselves subject to some defect. Perhaps it is intellectual laziness, or some congenital skepticism. . . . The repetitious recourse to any doctrinal framework, including the one most in fashion in the Society at the time, they do not find helpful to themselves, and they regard it as perhaps their duty and privilege to seek for or to exemplify other aspects of truth to supplement the limited emphasis. It is not that they wish to deny what the theologian affirms, but that they find his approach uncongenial and irrelevant to their own spiritual life, and they are indifferent or even pained or estranged when it is made central in the definition of Quakerism. . . . It does not speak to their condition. Their search is not for a more satisfactory theology, they do not believe that for them spiritual progress depends upon such factors. The obscurity of the mysteries of God does not really bother them and they have no confidence that even the most rational of religious analyses would add a cubit to their moral stature. They have, therefore, neither the will nor the competence to deal with the situation, but they hold their peace by simply keeping their own counsels without contradiction or controversy.” [#34]

Arthur Morgan (1878–1975) was an engineer, educator, and utopian. . . . He was president of Antioch College, chairman and chief engineer of the Tennessee Valley Authority, a founder of Celo Community and the Fellowship of Intentional Communities. He was a Unitarian who became a Friend in 1940. In 1953 Morgan proposed a minute to his yearly meeting opening Friends membership to people of other faiths. In it he stated,

“Many men and women of many faiths have shared in the search for truth and love and human brotherhood. Each faith has helped its sincere followers in that search. Each faith has something to learn from the others, and something to give. The Lake Erie Association of Friends desires to be a unit of such a brotherhood, and welcomes into its membership and to its meetings all sincere, concerned seekers whose ways of life and ethical standards and practices are compatible with its own.” [#35]

The minute was not approved.

Arthur Morgan declined to sign the Humanist Manifesto in 1933. He saw positive value in religion and did not want it cast aside. In a letter published in the same issue of The New Humanist as the manifesto, he wrote:

“I believe that unless the Humanist movement achieves a better distribution of emphasis, it will act as a sectarian movement to divide those who have one partial view of the issues of life from those who have another partial view . . . [A]ny vital religion must give great emphasis to faith, which in essence is an unproven conviction of the significance of living. . . . Faith, hope, and love are usually transmitted by contagion from persons who possess those qualities, but the human associations which transmit them generally have transmitted also an uncritical credulity. . . . Those who are free from that uncritical credulity commonly are also free to a considerable extent from the faith, the hope, and the warm love of men which so commonly accompanied that credulity in our religious history, when nearly all men were credulous. . . . The problem of humanism is . . . to hold faithfully to a completely open–minded and critical attitude, while holding to, or eagerly seeking, the strong drives of faith, hope, and love. As such strong drives appear they will express themselves in heroic living, and by contagion will be transmitted. . . . Religion should instill a hot partisanship for life which shall set for science the task of finding significance or of creating it. ‘Wishful thinking’, if wisely inspired, may cause the discovery or creation of the values wished for. Our business is to find significance, or to create it.” [#36]

[Also mentioned: Morris Mitchell (1895–1976), E. A. Burtt (1892–1989), Richard S. Peters (1919–2011) and Alice and Staughton Lynd.]

Nontheist Friends

The first public expression of nontheism among Friends that I know of was the Humanistic Society of Friends, founded in Los Angeles in 1939. Many of the members had been Quakers, including their leader, Lowell H. Coate (1889–1973), but their literature did not mention Quakers. Coate later served as editor of The Humanist World, American Rationalist, and The Rationalist. The Society published The Humanist Friend from 1939 to 1943, and continued as an organization until it became a chapter of the American Humanist Association in 1987. Three years later the chapter became a division of AHA. It was given responsibility for ministerial and religious humanism programs.

In 1963 Claire Walker wrote in Friends Journal, “Questing Quakers cannot feel comfortable with the supernatural in any form, but they are very clear about the crucial importance in our lives of values and implementation of values in our day-to-day living.” This was followed by Joseph Havens’s call for the study of post-Christian Quakerism, and Lawrence Miller’s review of John Robinson’s Honest to God, which asked what sort of God, if any, is required in religion. Later in 1964, the words “nontheist” and “nontheistic” appeared in four Friends Journal letters about Daniel A. Seeger’s successful effort to end the government practice of defining religion in terms of belief in a Supreme Being when considering applications for religious exemption from military service. Quakerism in the absence of God was now being considered. [#37]

The first public gathering of nontheist Friends that I know of was the “Workshop for Non-Theistic Friends” at the FGC Gathering in Ithaca, NY, in 1976. Their published report was written collectively by 15 to 20 Friends led by Robert Morgan (1913–1992). It is a stirring declaration:

“There are non-theistic Friends. There are Friends who might be called agnostics, atheists, skeptics but who would, nevertheless, describe themselves as reverent seekers. The fifteen to twenty of us who joined this workshop did so out of the need to share ideas with others who are searching for an authentic personal religious framework. The lack of an adequate religious vocabulary which could be used as an alternative to traditional concepts has led to mistaken assumptions about individual non–traditional beliefs, thus hindering dialogue and real communication among Friends. . . . We share a respect and concern for all human beings. We shared an admiration for the history of Quaker altruism and a desire to be part of our own Meeting “families.” Welcoming diversity, we were stimulated in our own thinking by listening to the beliefs of others. It is exciting to share these beliefs, but it is even more exciting to sense that we all had experienced important values and feelings that can not be adequately expressed intellectually. For us these values have given truth and meaning and zest to everyday life and an experience of religion as a growing, evolving concept. . . . Why do we belong to the Religious Society of Friends? In part because we feel the need to seek from within a loving and traditionally tolerant, gathered community. We found in our group that we were representative of a rainbow of beliefs which exists within the larger Society of Friends. This spectrum included theists who define God as a spirit or presence which intervenes and guides in a personal way. Most were non–theists who, while believing in something universal beyond our biological selves which exists in everyone, do not believe in an external directing spirit. . . . We hope for sensitivity and trust in our Meetings which allow us to grow in a community of seekers despite our differences.
Unable to accept traditional theology, we are skeptical about substituting new concepts lest they become yet another theological system, but we felt it important to share the thoughts that sprang from this workshop with old and new Friends, young Friends and those who are considering becoming Friends. We believe Quakerism can accommodate this minority, and find part of its vital creativity in the process.” [#38]

John Linton (c.1911–2010) was one of the founders of the Quaker Universalist Group in 1979. He wrote,

“This new group would be committed to the view that, however great one’s reverence for the teachings and personality of Jesus, no faith can claim to be a unique revelation or to have a monopoly of truth. Because Christianity traditionally makes this claim, members of the group feel that they cannot limit themselves by calling themselves Christians. In their search for truth, and also in the interests of world peace and brotherhood, they are opposed to all divisive religious claims. They take the view that truth can be reached by more than one path. Yet because they believe in the Quaker way of life, and that Quakerism is universally valid and not dependent on Christianity, they have no wish to cut themselves off from the Society of Friends.” [#39]

In 1979 he wrote,

“It seems to me that the Society would be greatly strengthened by the influx of people who claim to be agnostic rather than Christian and yet who sincerely share the fundamental aspirations of Quakers. I shall therefore argue not merely that the Society should admit such people as a fringe element of ‘second–class members’ (which is what they feel at the present), but that it should widen its own basis and give up its claim to be a specifically Christian organization. I think this should be done not just as a matter of expediency, but in the pursuit of Truth, because I believe the Truth is wider than Christianity. And I like to think that Quakerism is about the search for Truth.” [#40]

Kingdon W. Swayne (1920–2009) published “Confession of a Post–Christian Agnostic” in 1980. Four years later, Philadelphia Yearly Meeting selected him as their clerk. He wrote,

“My own religious life can perhaps be best understood as an effort to build moral stability and connectedness by creating a web of motivation and behavior that is internally consistent and emotionally satisfying. I describe myself as post–Christian because both my best behavior and its motivations owe much to Christian thinking, though I reject most of the traditional theology. . . . If one rejects the authority [of Jesus] and most of the Christian tradition, where does one begin to build a belief system? I think I begin with the existentialist proposition that life without meaning or purpose is intolerable. Therefore one must define the meaning and purpose of one’s own life. I believe this task is within my power and is my sole responsibility. I prefer to see myself not as finding and doing God’s will but as striving for goodness on the basis of general principles that are derived from my own sense of the nature of the universe. . . .” [#41]

In 1986 Swayne wrote,

“I am a lifelong Friend who has been encouraged by his Quaker (dare I say Hicksite?) upbringing to construct his own edifice of religious meaning. My edifice is non–theistic . . . I don’t think it is terribly important how Universalistic or how Christocentric the early Friends were. The important point is that late 20th century Quakerism be true to its non–creedal self. For its role in the larger religious society of our era surely is as home and refuge for those stubborn individualists who create their own theologies but need a community in which to pursue their spiritual journeys.” [#42]

[Also mentioned: Eric Johnson (c1918–1994).]


Several surveys show the presence of nontheists among Friends. In Britain in 1989, 692 Quakers were asked “Do you believe in God?” and 26% answered “No” or “Not Sure”. In Philadelphia in 2002, 56% of 552 Quakers indicated “No” or “No Definite Belief” in response to the statement, “I believe in a God to whom one can pray in the expectation of receiving an answer. By ‘answer’ I mean more than the subjective, psychological effect of prayer. [italics in the original]” In the same survey, 44% disagreed, or neither agreed nor disagreed, with the statement “I very much want a deeper spiritual relationship with God,” and 52% did not agree with the statement “I have had a transcendent experience where I felt myself in the presence of God.” These polls are described in David Rush’s chapter in Godless for God’s Sake: Nontheism in Contemporary Quakerism. Also see Rush’s interviews with 199 nontheist Friends. [#43]


There was a nontheist workshop at the Friends General Conference Gathering in 1976, and then none until Robin Alpern, Bowen Alpern, and Glenn Mallison held one in 1996. Since then there have been one or two nontheist workshops almost every year. Robin Alpern and David Boulton have written histories of these events. [#44]

In 2004 and 2011–14 there were workshops at the Woodbrooke Quaker Study Centre in Birmingham, England, and in 2005 at Pendle Hill, a Quaker center for study and contemplation near Philadelphia. These were attended by about 30 people each time. A strong desire was expressed to support other Friends, whatever their religious views, and to be supported in turn.

There have also been nontheist Friends events at Powell House in New York Yearly Meeting, Ben Lomond Quaker Center in Pacific Yearly Meeting, and in other locations.

Internet Sites

A website with Quaker nontheist writings, a blog, and an email discussion group was established in 2003 and is recognized as an affinity group by Friends General Conference. [For the “Welcome Statement,” see the website.]

The Nontheist Friends Network was organized in 2011 and is a listed informal group of Britain Yearly Meeting. They have a website, an email discussion group and newsletter, and they sponsor an annual conference and other events. [For their purposes, see the website.]

A leaflet of the Nontheist Friends Network contains this message:

“Whether we describe ourselves as humanists, agnostics or atheists, and whether we understand God as the symbol and imagined embodiment of our highest human values or avoid the word altogether, nontheist Friends know that we don’t know it all. Our various ways of being nontheist are simply various ways of being Quaker, and we celebrate the radical diversity of Quakerism, nontheist and theist. We do not see ourselves as on the Quaker fringe but as part of the broad mainstream, with something to give and much to learn from the ongoing Quaker tradition. We too are Friends and seekers.” [#45]


Many Quaker humanists and nontheists have published their writings, especially in recent years. [For list of NTF writings, see]

In 2006 David Boulton edited and published a collection of essays by 27 Quaker nontheists titled Godless for God’s Sake: Nontheism in Contemporary Quakerism. [#46] This book was reviewed by Chuck Fager. He wrote,

“What have we come to in Friends religious thought, when the most exciting book of Quaker theology I’ve read in years is produced by a bunch of Quaker non-theists—twenty-seven in all? Well, there will be no hand-wringing about that here: I’ll take thoughtful, articulate, and challenging religious thought wherever I can find it—and there’s plenty of that in this compact volume. . . . What was it that The Man said? “By their fruits ye shall know them.” If that’s so, then as a group, nontheist Friends have as much claim to a legitimate place in contemporary Quakerism as many who feel they are defending the last true redoubt against the invading forces of unbelief. The proper response to the testimonies in these pages is not scorn or witchhunts, but an invitation to further conversation. And in my case, gratitude that these nontheists have taken the theology they don’t accept seriously enough to think and write about it as thoughtfully and engagingly as they have here.” [#47]


There have always been nontheist Friends, although they have not always spoken out. There have also been Friends whose views were compatible with nontheism, such as the view that Jesus was human like the rest of us, or that the Inner Light can be identified with natural processes such as the human conscience.

In 1913 a group of adult young Friends spent a year studying the condition of the two Philadelphia Yearly Meetings. In their report they wrote,

“[O]ne of the inherited features of Hicksite Quakerism is a deliberate indifference to uniformity of belief. As the Intelligencer says: “Our attitude has been one that in no way tended to uniformity of belief, and as a matter of fact we have had wide divergence of belief. We have hardly a meeting that has not had at times, at least, among its most active and influential members, those of varying shades of belief, all the way from literal interpretation of Scripture to Unitarian, agnostic and even atheistic doctrine.” Most Hicksites have little interest in theology.” [#48]

The young Friend who wrote the report is not named but is said to have been Henry Cadbury. [#49]

Since then nontheism has gradually emerged into public view. Survey data support the sense that there are nontheists in Quaker meetings today, and probably more than are generally known. Many of them may be silent for positive reasons, being comfortable in their meetings and having more important things to talk about.

Diversity is a good thing if we don’t pay the price of keeping silent about what we hold dear. We are just beginning to learn how to be diverse. One hundred years after the young Friends in Philadelphia studied how their yearly meetings might reunite, a committee of Quaker Earthcare Witness (QEW) approved a statement on unity with diversity. They wrote,

“As both Friends and environmentalists we on the Spiritual Nurturance Committee of QEW hold a variety of personal views, beliefs and approaches based in the variety of our backgrounds, traditions and experiences. We see it as good for QEW to endeavor to work with all who share our basic goals, both QEW participants and others. . . . Within the Spiritual Nurturance Committee we have collectively lived out the experience of acknowledging diversity while seeking and remaining in unity. We value inclusivity in our relations with each other. We commit ourselves to trying to focus on the spirit rather than the letter, listening and speaking from the heart, and seeking and sharing from the heart, in the manner of Friends. We recommend this model to QEW for our work with one another and with other organizations. We offer the seeming paradox of diversity within the supportive and inclusive structure of our unity.” [#50]

It is good to work for acceptance of diverse philosophical points of view among Friends, especially views not held by the person speaking. Practices that facilitate the inclusion of one set of people, such as nontheists, are practices that are good for the meeting as a whole.

On these pages you have read about an incredible community of religious thinkers. It has been a joy for me to bring you together with them.


Please send additional material or references to

Anthony, Susan B.

Anthony, Katharine. Susan B. Anthony: Her Personal History and Her Era. NY: Doubleday, 1954.

Barry, Kathleen. Susan B. Anthony: A Biography of a Singular Feminist. NY: New York University Press, 1988.

Sherr, Lynn. Failure is Impossible: Susan B. Anthony In Her Own Words. NY: Random House, 1995.

Jacob Bauthumley

Bauthumley, Jacob. The Light and Dark Sides of God. London: William Learner, 1650. Also in Nigel Smith. A Collection of Ranter Writings from the 17th Century. London: Junction Books, 1983.

Cohn, Norman. The Pursuit of the Millennium: Revolutionary Millenarians and Mystical Anarchists of the Middle Ages. New York: Oxford University Press, 1970.

Hill, Christopher. The World Turned Upside Down: Radical Ideas During the English Revolution. NY: Viking Press, 1972.

Barnard, Hannah

American National Biography, s.v. “Barnard, Hannah Jenkins.”

Barnard, Hannah. Dialogues on Domestic and Rural Economy, and Fashionable Follies of the World. Interspersed with Occasional Observations on Some Popular Opinions. To Which is Added an Appendix, on Burns, etc. with their Treatment. Hudson, NY: Samuel W. Clark, 1820.

Cresson, Os. “Hannah Barnard’s Story.” Unpublished manuscript, 2006.

Fager, Chuck. “Hannah Barnard—A Liberal Quaker Hero.” Friends Journal 42 no. 1 (1996): 11–12.

Frost, J. William. The Records and Recollections of James Jenkins. Texts and Studies in Religion, Vol. 18. NY: Edwin Mellen Press, 1984, pp. 339–80.

Maxey, David. “New Light on Hannah Barnard, A Quaker ‘Heretic’,” Quaker History (Fall, 1989): 61–86.

Bartram, John

Clarke, Larry R. “The Quaker Background of William Bartram’s View of Nature.” Journal of the History of Ideas 46, no. 3 (1985): 435–448.

Darlington, William and Peter Collinson, eds. Memorials of John Bartram and Humphrey Marshall. NY: Hafner, 1967.

Wilson, David Scofield. In the Presence of Nature. Amherst MA: University of Massachusetts Press, 1978.

Cadbury, Henry J.

Bacon, Margaret Hope. Let This Life Speak: The Legacy of Henry Joel Cadbury. Philadelphia: University of Pennsylvania Press, 1987.

Bacon, Margaret Hope. Henry J. Cadbury: Scholar, Activist, Disciple. Pamphlet #376. Wallingford, PA: Pendle Hill, 2005.

Cadbury, Henry J. “My Personal Religion.” Universalist Friends 35 (Fall–Winter 2000): 22–31, with corrections in 36 (Spring–Summer 2000): 18.

Cadbury, Henry J. Quakerism and Early Christianity. London: Allen & Unwin, 1957.

Cadbury, Henry J. The Character of A Quaker. Pamphlet #103. Wallingford, PA: Pendle Hill, 1959. Also in “Two Strands in Quakerism.” Friends Journal 14, no. 5 (April 4, 1959): 212–14.

Cadbury, Henry J. “My Religious Pilgrimage.” (Notes for talk at Doylestown Monthly Meeting, April 1, 1962.) Unpublished manuscript, 1962. Henry J. Cadbury Papers, Quaker Collection, Haverford College, Haverford, PA.

Cadbury, Henry J. “Quakerism and/or Christianity.” Friends Bulletin 35, no. 4 (1966): 1–10.

Cresson, Os. “Henry Joel Cadbury: No Assurance of God or Immortality” in Boulton, David, ed. Godless for God’s Sake: Nontheism in Contemporary Quakerism. Dent, Cumbria, UK: Dales Historical Monographs, 2006, pp. 85–90.

Duncan, David

Cresson, Os. “David Duncan and the Free Friends of Manchester” in Boulton, David, ed. Godless for God’s Sake: Nontheism in Contemporary Quakerism. Dent, Cumbria, UK: Dales Historical Monographs, 2006, pp. 82–85 & 90.

Duncan, David. ‘Essays and Reviews’. A Lecture. Manchester, UK, 1861.

Duncan, David. Can an Outward Revelation be Perfect? Reflections upon the Claim of Biblical Infallibility. London, 1871.

Isichei, Elizabeth. Victorian Quakers. Oxford: Oxford University Press, 1970.

Kennedy, Thomas. British Quakerism 1860–1920. Oxford: Oxford University Press, 2001.

Forster, Joseph B.

Forster, Joseph B. On Liberty: An Address to Members of the Society of Friends, 1867. Quoted in Isichei, Victorian Quakers, 30.

Forster, Joseph B. The Society of Friends and Freedom of Thought in 1871, 1871.

Manchester Friend. Ed. Joseph B. Forster, 1871–73.

Free Quakers

Wetherill, Charles. History of the Free Quakers. Washington, D.C.: Ross & Perry, 2002.

Wetherill, Samuel. An Address To those of the People called Quakers, who have been disowned for Matters Religious and Civil. Philadelphia, PA, 1781. Reprinted in Wetherill, History of the Free Quakers, above, pp. 47–49.

Hicks, Elias

Forbush, Bliss. Elias Hicks: Quaker Liberal. NY: Columbia University Press, 1956.

Jacob, Norma. Introducing…Elias Hicks: A Condensation of Bliss Forbush’s Original Biography. Philadelphia: Friends General Conference, 1984.

Holmes, Jesse H.

Holmes, Jesse. The Modern Message of Quakerism. Philadelphia: Friends General Conference, 1912. Also published as What is Truth? Philadelphia: Friends General Conference (undated).

Holmes, Jesse. “To the Scientifically–Minded.” Friends Intelligencer 85, no. 6 (1928): 103–04. Reprinted in Friends Journal 38, no. 6 (June 1992): 22–23. Also published as To the Scientifically–Minded. Philadelphia: Friends General Conference (undated), and A Los Intellectuales. Philadelphia: Friends General Conference (undated).

Holmes, Jesse. “The Quakers and the Sciences.” Friends Intelligencer 88, no. 6 (1931): 537–38.

Holmes, Jesse. “‘Our Christianity’?” Universalist Friends 39 (Fall & Winter, 2003): 15–22.

Stern, T. Noel. “Jesse Holmes, Liberal Quaker.” Friends Journal 38, no. 6 (June 1992): 21–23.

Wahl, Albert J. Jesse Herman Holmes, 1864–1942: A Quaker’s Affirmation for Man. Richmond, IN: Friends United Press, 1979.

Humanist Society of Friends (Lowell H. Coate)

The Humanist Friend, 1939–1944.

Wilson, Edwin H. Genesis of a Humanist Manifesto, Amherst NY: Humanist Press, 1995.

Linton, John

Linton, John. “A Universalist Group.” Letter to the editor. The Friend. 136 (April 21, 1978): 484.

Linton, John. “A Universalist Group.” Letter to the editor. The Friend. 136 (October 20, 1978): 1315.

Linton, John. “Quakerism as Forerunner.” Friends Journal 25, no. 17 (October 15, 1979): 4–9. Reprinted as Quakerism as Forerunner. Pamphlet #1. London: Quaker Universalist Group, 1979. Also reprinted in Quaker Universalist Fellowship. The Quaker Universalist Reader Number 1: A Collection of Essays, Addresses and Lectures. Landenberg, PA: printed by author, 1986, 1–13.

Linton, John. “Nothing Divides Us.” The Universalist 12 (1984): 16–20.

Littleboy, William

Littleboy, William. The Appeal of Quakerism to the Non–Mystic. Harrowgate, England: Committee of Yorkshire Quarterly Meeting of the Society of Friends, 1916. Reprinted by the Friends Literature Committee, Yorkshire, 1938, and by Friends Book Centre, London, 1945.

Morgan, Arthur

Kahoe, Walter. Arthur Morgan: A Biography and Memoir. Moylan, PA: The Whimsie Press, 1977.

Morgan, Arthur. “My World.” Unpublished manuscript, 1927. Library, Antioch College, Yellow Springs, OH.

Morgan, Arthur. Should Quakers Receive the Good Samaritan Into Their Membership? Landenberg, PA: Quaker Universalist Fellowship, 1998.

Morgan, Arthur. Search for Purpose. Yellow Springs, OH: Community Service, Inc., 1957.

Morgan, Arthur. “Necessity.” Unpublished manuscript, 1968. Quoted in Kahoe, Arthur Morgan, above.

Morgan, Ernest. Arthur Morgan Remembered. Yellow Springs, OH: Community Service, Inc., 1991.

Wilson, Edwin H. Genesis of a Humanist Manifesto. Amherst, NY: Humanist Press, 1995.

Mott, Lucretia

Bacon, Margaret Hope. Valiant Friend: The Life of Lucretia Mott. NY: Walker and Company, 1980. Reprinted in Philadelphia: Friends General Conference, 1999.

Cromwell, Otilia. Lucretia Mott. Cambridge, MA: Harvard University Press, 1958.

Greene, Dana, ed. Lucretia Mott: Her Complete Speeches and Sermons. NY: Edwin Mellen Press, 1980.

Hallowell, Anna Davis. James and Lucretia Mott. Life and Letters. Boston: Houghton, Mifflin, 1890.

Palmer, Beverly Wilson, ed. Selected Letters of Lucretia Coffin Mott. Urbana: University of Illinois Press, 2002.

Progressive Friends at Longwood

Densmore, Christopher. “Be Ye Therefore Perfect: Anti–Slavery and the Origins of the Yearly Meeting of Progressive Friends in Chester County, Pennsylvania.” Quaker History 93, no. 2 (Fall 2004): 28–46. By courtesy of the author.

Longwood Progressive Friends Meetinghouse, 1855–1940: 150th Anniversary Celebration. Kennett Square, PA, May 22, 2005.

Rowntree, Joseph

Rowntree, Joseph. Memorandum on the Declaration of Christian Doctrine issued by the Richmond Conference, 1887. York, UK, 5th month 10, 1888.

Seeger, Daniel A.

Bien, Peter and Chuck Fager, eds. In Stillness There is Fullness: A Peacemaker’s Harvest: Essays and Reflections in Honor of Daniel A. Seeger’s Four Decades of Quaker Service. Bellefonte, PA: Kimo Press, 2000.

Cresson, Os. “Reviews of Publications on Quaker Nontheism in the 1960s.” (Review #4.) Unpublished manuscript. Online at

Seeger, Daniel A. “Is Coexistence Possible?” Friends Journal 30, no. 12 (1984): 11–14. Also in Quaker Universalist Fellowship. Quaker Universalist Reader Number 1. Landenberg, PA: printed by author, 1986, 85.

Seeger, Daniel A. The Mystical Path: Pilgrimage To The One Who Is Always Here. Millboro, VA: Quaker Universalist Fellowship, 2004. Online at

Seeger, Daniel A. “Why Do the Unbelievers Rage: The New Atheists and the Universality of the Light.” Friends Journal 56, no. 1 (Jan. 1, 2010): 6–11. Online at

Stanton, Elizabeth Cady

Densmore, Christopher. “Forty-Seven Years Before the Woman’s Bible: Elizabeth Cady Stanton and the Congregational Friends.” Paper presented at the Women’s Centennial Conference, Seneca Falls, NY, November 4, 1995, by courtesy of the author.

DuBois, Ellen. The Elizabeth Cady Stanton–Susan B. Anthony Reader. Ithaca, NY: Cornell University Press, 1994.

Gaylor, Annie Laurie. Women Without Superstition: No Gods—No Masters. Madison WI: Freedom From Religion Foundation, 1997.

Stanton, Elizabeth Cady. Eighty Years & More: Reminiscences 1815–1897. Boston: Northeastern University Press, 1993.

Stanton, Elizabeth Cady. The Woman’s Bible. Boston: Northeastern University Press, 1993.

Stanton, Theodore and Harriot Stanton Blatch, eds. Elizabeth Cady Stanton As Revealed in Her Letters Diary and Reminiscences, Volumes One and Two. NY: Arno & The New York Times, 1969.

Swayne, Kingdon W.

Swayne, Kingdon W. “Confessions of a Post–Christian Agnostic.” Friends Journal 26, no. 3 (March 15, 1980): 6–9. Also in Quaker Universalist Fellowship. Variations on the Quaker Message. Landenberg, PA: printed by author, 1990, 1–6.

Swayne, Kingdon W. “Universalism or Latitudinarianism?” Universalist Friends 7 (1986): 8–11.

Swayne, Kingdon W. “Humanist Philosophy as a Religious Resource,” in Quaker Universalist Fellowship. Varieties of Religious Experience: An Adventure In Listening. Pamphlet #7. Landenberg PA: printed by author, 1990.

Swayne, Kingdon W. “Universalism and Me—3 Friends Respond.” Universalist Friends 23 (1994): 9–10.

Walker, Claire

Blalock, Heidi. “Remembering Claire.” Collection: The Magazine of Friends School of Baltimore (Spring 2009), pp. 2–5.

Walker, Claire. “Must We Feel Comfortable?” Friends Journal 9, no. 15 (August 1, 1963): 334.

Walker, Claire. “The Anti-Anthros Speak Out.” Friends Journal 22, no. 19 (November 15, 1976): 583–85.

Winstanley, Gerrard

Boulton, David. Gerrard Winstanley and the Republic of Heaven. Dent, Cumbria, UK: Dales Historical Monographs, 1999.

Boulton, David. Militant Seedbeds of Early Quakerism. Landenberg, PA: Quaker Universalist Fellowship, 2005. http://www.universalistfriends.org/boulton.html.

Cohn, Norman. The Pursuit of the Millennium: Revolutionary Millenarians and Mystical Anarchists of the Middle Ages. NY: Oxford University Press, 1970.

Hill, Christopher. The World Turned Upside Down: Radical Ideas During the English Revolution. NY: Viking Press, 1972.

Sabine, George Holland. The Works of Gerrard Winstanley. New York: Russell & Russell, 1965.

Workshop for Non-Theistic Friends (Robert Morgan)

Morgan, Robert M. “Some Surprises For Us?” Friends Journal 22, no. 19 (November 15, 1976): 582–83.

Workshop for Non-Theistic Friends. “Seekers Beyond Tradition.” Friends Journal 22, no. 19 (November 15, 1976): 586–87. Slightly edited version of unpublished report by participants in the Workshop for Non-Theistic Friends held at the Friends General Conference Gathering, Ithaca NY, June 26-July 3, 1976.

Source Notes

  1. Gerrard Winstanley, The Law of Freedom in a Platform, or True Magistracy Restored (1652), in Gerrard Winstanley, The Works of Gerrard Winstanley, ed. George H. Sabine (Ithaca, New York: Russell & Russell, 1965), 501–600.
  2. Gerrard Winstanley, The Law of Freedom, 104–05.
  3. Gerrard Winstanley, The New Law of Righteousness, in Gerrard Winstanley, The Works of Gerrard Winstanley, 170.
  4. Jacob Bauthumley, The Light and Dark Sides of God, Or a plain and brief Discourse of the Light side (God, Heaven and Earth) The dark side (Devill, Sin, and Hell) (London: William Learner, 1650).
  5. (a) David Scofield Wilson, In the Presence of Nature (Amherst, MA: University of Massachusetts Press, 1978), 92. (b) John Bartram to Peter Collinson, June 11, 1743, William Darlington and Peter Collinson, eds., Memorials of John Bartram and Humphrey Marshall (New York: Hafner, 1967), 164. (c) John Bartram to Benjamin Rush, December 5, 1767, Thomas P. Slaughter, The Natures of John and William Bartram (Philadelphia: University of Pennsylvania Press, 2005), 62.
  6. Charles Wetherill, History of the Free Quakers (Washington, D.C.: Ross & Perry, 2002), 48.
  7. Charles Wetherill, Free Quakers, 32.
  8. Hannah Barnard, in Thomas Foster, An Appeal to the Society of Friends on the Primitive Simplicity of their Christian Principles and Church Discipline; and on Some Recent Proceedings in the Said Society (London: J. Johnson, 1801), 122–23.
  9. Hannah Barnard to William Matthews, September 6, 1802, William Matthews, The Recorder (London: J. Johnson, 1802).
  10. Elias Hicks, in Bliss Forbush, Elias Hicks: Quaker Liberal (NY: Columbia University Press, 1956), 78.
  11. Elias Hicks, in Norma Jacob, Introducing . . . Elias Hicks: A Condensation of Bliss Forbush’s Original Biography (Philadelphia: Friends General Conference, 1984), 19.
  12. David Duncan, ‘Essays and Reviews.’ A Lecture (Manchester, UK: Edwin Slater, 1861), 8.
  13. Friends at the Memorial Hall, Manchester, “Address Adopted by the Friends at the Memorial Hall, Manchester,” The Manchester Friend 2, no. 12 (1873), 190.
  14. Joseph B. Forster, On Liberty. An Address to the Members of the Society of Friends (London: F. Bowyer Kitto and Sutherland: W. H. Hills, 1867), 26.
  15. Joseph B. Forster, editorial, The Manchester Friend 1, no. 1 (1871), 1, italics in the original.
  16. Christopher Densmore, “Be Ye Therefore Perfect: Anti–Slavery and the Origins of the Yearly Meeting of Progressive Friends in Chester County, Pennsylvania,” Quaker History 93, no. 2 (2004), 28–46.
  17. Oliver Johnson, Message during yearly meeting in Waterloo NY, June 3, 1855, in the Proceedings of the Annual Meeting of Friends of Human Progress (Syracuse NY: Evening Chronicle Print, 1855), 5.
  18. (a) Christopher Densmore, “Be Ye Therefore Perfect,” 41. (b) Pennsylvania Yearly Meeting of Friends, Exposition of Sentiments (1853),
  19. (a) Lucretia Mott to Mary P. Allen, June 5, 1877, in Anna Davis Hallowell, ed., James and Lucretia Mott: Life and Letters (Boston: Houghton Mifflin, 1890), 460. (b) Lucretia Mott, “When the Heart Is Attuned to Prayer,” in Dana Greene, ed., Lucretia Mott: Her Complete Speeches and Sermons (NY: Edwin Mellen, 1980), 302. (c) Lucretia Mott to James L. Pierce, January 15, 1849, in Anna Davis Hallowell, James and Lucretia Mott, 315.
  20. Lucretia Mott, in Edward T. James, Janet Wilson James and Paul S. Boyer, Notable American Women 1607–1950: A Biographical Dictionary, vol. 2 (Cambridge, MA: Belknap Press of Harvard University Press, 1975), 592–95.
  21. Lucretia Mott, address at annual meeting of the Free Religious Association, June 2, 1871, in Anna Davis Hallowell, James and Lucretia Mott, 551.
  22. Lucretia Mott, conversation with Elizabeth Cady Stanton, 1840, in Anna Davis Hallowell, James and Lucretia Mott, 188.
  23. (a) Susan B. Anthony, “Divine Discontent,” in Lynn Sherr, Failure is Impossible: Susan B. Anthony In Her Own Words (NY: Random House, 1995), notes 17, 20, 6, and 32. (b) Susan B. Anthony, address to Pennsylvania Yearly Meeting of Progressive Friends at Longwood, PA, 1873, in Proceedings of the Pennsylvania Yearly Meeting of Progressive Friends Held at Longwood, Chester County (NY: Baker & Godwin, 1873), 56.
  24. Elizabeth Cady Stanton, Eighty Years and More: Reminiscences, 1815–1897 (Boston: Northeastern University Press, 1993), 44.
  25. Elizabeth Cady Stanton, “The Pleasures of Age,” speech on November 12, 1885, The Selected Papers of Elizabeth Cady Stanton and Susan B. Anthony, vol. 4, ed. Ann D. Gordon (New Brunswick, NJ: Rutgers University Press, 2006), 459.
  26. Joseph Rowntree, Memorandum on the Declaration of Christian Doctrine issued by the Richmond Conference, 1887 (York, UK, 5th month 10, 1888).
  27. William Littleboy, The Appeal of Quakerism to the Non–Mystic (Harrowgate, UK: Committee of Yorkshire Quarterly Meeting of the Society of Friends, 1916). Reprinted by the Friends Literature Committee, Yorkshire, 1938, and by Friends Book Centre, London, 1945.
  28. Henry J. Cadbury, Quakerism and Early Christianity (London: Allen & Unwin, 1957).
  29. (a) John Linton, “Quakerism as Forerunner,” Friends Journal 25, no. 17 (October 15, 1979): 4–9. Reprinted as Quakerism as Forerunner, pamphlet #1 (London: Quaker Universalist Group, 1979). Also in Quaker Universalist Fellowship, The Quaker Universalist Reader Number 1: A Collection of Essays, Addresses and Lectures (Landenberg, PA: printed by author, 1991), 1. (b) Daniel A. Seeger, “Is Coexistence Possible?,” Friends Journal 30, no. 12 (1984): 11–14. Also in Quaker Universalist Reader Number 1 (Landenberg, PA: Quaker Universalist Fellowship, 1986), 85.
  30. Jesse Holmes, “To the Scientifically–Minded,” Friends Intelligencer 85, no. 6 (1928): 103–04. Reprinted in Friends Journal 38, no. 6 (June 1992): 22–23. Also published as To the Scientifically–Minded (Philadelphia: Friends General Conference, undated), and A Los Intellectuales (Philadelphia: Friends General Conference, undated).
  31. (a) Jesse Holmes, The Modern Message of Quakerism, Philadelphia: Friends General Conference, 1912. Also published as What is Truth? Philadelphia: Friends General Conference (undated). (b) Jesse Holmes, “The Quakers and the Sciences,” Friends Intelligencer 88, no. 6 (1931): 537–38.
  32. Jesse Holmes, “‘Our Christianity’?” Universalist Friends 39 (Fall & Winter, 2003): 15–22.
  33. Henry J. Cadbury, “My Personal Religion,” Universalist Friends 35 (Fall–Winter 2000): 22–31, with corrections in 36 (Spring–Summer 2000): 18. For another interpretation of Cadbury’s writings, see Paul Anderson, “Is ‘Nontheist Quakerism’ a Contradiction of Terms?” Quaker Religious Thought 118 (May 2012): 5–24.
  34. Henry J. Cadbury, Quakerism and Early Christianity, (London: Allen & Unwin, 1957), 47–48.
  35. Arthur Morgan, “Universal Brotherhood in Religion,” Friends Intelligencer (October 17, 1953): 558 and 564.
  36. Arthur Morgan, letter, The New Humanist, 6 (May–June, 1933).
  37. (a) Claire Walker, “Must We Feel Comfortable?” Friends Journal 9, no. 15 (August 1, 1963): 334. (b) Joseph Havens, “Christian Roots and Post-Christian Horizons” Friends Journal 10, no. 1 (January 1, 1964): 5–8. (c) Lawrence McK. Miller, Jr., “The ‘Honest to God’ Debate and Friends” Friends Journal 10, no. 6 (March 15, 1964): 124–26. (d) Letters by Howard Kershner, Albert Schreiner and Mary Louise O’Hara in Friends Journal, April 1, May 15 and July 15, 1964. (e) For more on this, see: Os Cresson, “Reviews of Publications on Quaker Nontheism in the 1960s” (unpublished manuscript),
  38. Workshop for Non-Theistic Friends, “Seekers Beyond Tradition” Friends Journal 22, no. 19 (November 15, 1976): 586–87. Slightly edited version of an unpublished report by participants in the Workshop for Non-Theistic Friends held at the Friends General Conference Gathering, Ithaca NY, June 26–July 3, 1976. The workshop is also described in Robert Morgan, “Some Surprises For Us?” Friends Journal 22, no. 19 (November 15, 1976): 582–83.
  39. John Linton, letter, “A Universalist Group,” The Friend 136 (April 21, 1978): 484. See John Linton, letter, “A Universalist Group” The Friend 136 (October 20, 1978): 1315.
  40. John Linton, “Quakerism as Forerunner,” Friends Journal 25, no. 17 (October 15, 1979): 4–9. Reprinted as Quakerism as Forerunner, pamphlet #1 (London: Quaker Universalist Group, 1979). Also in Quaker Universalist Fellowship. The Quaker Universalist Reader Number 1: A Collection of Essays, Addresses and Lectures (Landenberg, PA: printed by author, 1986), 1–13.
  41. Kingdon W. Swayne, “Confessions of a Post–Christian Agnostic,” Friends Journal 26, no. 3 (February 15, 1980): 6–9. Also in Quaker Universalist Fellowship. Variations on the Quaker Message (Landenberg, PA: printed by author, 1990), 1–6.
  42. Kingdon W. Swayne, “Universalism or Latitudinarianism?,” Universalist Friends 7 (1986): 8–11.
  43. (a) David Rush, “Facts and Figures: Do Quakers Believe in God, and if They Do, What Sort of God?,” in David Boulton, ed., Godless for God’s Sake: Nontheism in Contemporary Quakerism (Dent, Cumbria, UK: Dales Historical Monographs, 2006), 91–100. Also see Mark S. Cary and Anita L. Weber, “Two Kinds of Quakers: A Latent Class Analysis,” Quaker Studies 12/1 (2007): 134–144. (b) David Rush, “They Too Are Quakers: A Survey of 199 Nontheist Friends,” The Woodbrooke Journal 11 (Winter 2002). Reprinted as They Too Are Quakers: A Survey of 199 Nontheist Friends (Millboro, VA: Quaker Universalist Fellowship, 2003).
  44. (a) Robin Alpern, “Reflections on a Decade of Nontheism Workshops” (unpublished manuscript, 2007), (b) David Boulton, “Nontheism Among Friends: Its History and Theology” (paper delivered at the Quaker Theological Discussion Group meeting at the American Society for Biblical Literature Conference, San Francisco CA, November 2011).
  45. (a) David Boulton, ed., “New Nontheist Friends Network in Britain”, last modified April 27, 2011, (b) quoted in David Boulton, “Nontheism Among Friends.”
  46. David Boulton, ed., Godless for God’s Sake.
  47. Chuck Fager, review, “Godless for God’s Sake: Nontheism in Contemporary Quakerism,” Quaker Theology 13 (2007),
  48. Henry Cadbury (?), “The Separation in the Society of Friends, 1827.” Friends Intelligencer 71, no. 9 (Second month 28, 1914): 129–132. Also published as Henry Cadbury (?), Differences in Quaker Belief In 1827 and To-Day (Philadelphia: Biddle Press, 1914).
  49. Margaret Hope Bacon, Let This Life Speak: The Legacy of Henry Joel Cadbury (Philadelphia: University of Pennsylvania Press, 1987), 26.
  50. Spiritual Nurturance Committee of Quaker Earthcare Witness, “Statement on Unity with Diversity,” BeFriending Creation 26, no. 3 (May–June 2013): 9,

Publications on Quaker Nontheism

Published 19 Jul 2016 by Os Cresson in

This first appeared in Quaker and Naturalist Too (Iowa City, IA: Morning Walk Press, 2014), pp. 135–145. The list is divided between earlier publications (1962–1995) and later publications (1996–2013). Unfortunately some publications have been missed, and the list is not being kept up to date. Please send copies of material to be included, or their references, to us at

Earlier Publications (1962–1995)

Allott, Stephen. “Quaker Agnosticism.” The Friends Quarterly 25, no. 6 (1989): 252–58.

Allott, Stephen. “Is God Objective Fact?” The Friends Quarterly 28, no. 4 (1994): 158–66.

Banks, John. “Simply the Thing I Am.” The Friends Quarterly 27, no. 7 (1993): 317–22.

Barbour, Ian G. Science and Secularity: The Ethics of Technology. NY: Harper & Row, 1970.

Boland, James R. “An Agnostic’s Apology.” Poem. Friends Journal 15, no. 13 (July 1/15 1969): 391.

Boulding, Kenneth. “Machines, Men, and Religion.” Friends Journal 14, no. 24 (December 15, 1968): 643–44.

Brayshaw, Maude. “The Search for God.” In Friends Home Service Committee, In Search of God: Some Quaker Essays. London: printed by author, 1966, pp. 5–6.

Cadbury, Henry J. “A Quaker Honest to God.” Friends Journal 10, no. 13 (July 1, 1964): 298–99.

Creasey, Maurice A. Bearings or Friends and the New Reformation. Swarthmore Lecture. London: Friends Home Service Committee, 1969.

Crom, Scott. “Human Experience and Religious Faith.” Friends Journal 11, no. 17 (September 1, 1965): 429–31.

Crom, Scott. “Intellectual Bankruptcy and Religious Solvency (Part I)”. Friends Journal 13, no. 21 (November 1, 1967): 566–68.

Crom, Scott. “Intellectual Bankruptcy and Religious Solvency (Part II)”. Friends Journal 13, no. 22 (November 15, 1967): 599–600.

Crom, Scott. “The Trusting Agnostic.” Comments by Maurice H. Friedman and John H. McCandless, and response to comments by Scott Crom. Quaker Religious Thought 14, no. 2 (1972): 1–39.

Evans, Cadifor. “The Appeal of Quakerism to the Agnostic.” In Friends Home Service Committee, In Search of God: Some Quaker Essays. London: printed by author, 1966, pp. 7–13.

Fuchs, Peter. “A Quaker Wannabe – Maybe.” Friends Journal 41, no. 5 (May 1995): 10–11.

Friends Journal. “The New Atheism” and “The Turning Point.” Editorials. Friends Journal 8, no. 12 (June 15, 1962): 251.

Havens, Joseph. “Christian Roots and Post-Christian Horizons.” Friends Journal 10, no. 1 (January 1, 1964): 5–8.

Holmes, Jesse. “To the Scientifically-Minded.” Friends Intelligencer 85, no. 6 (1928): 103–104. Reprinted in Friends Journal 38, no. 6 (June 1992): 22–23.

Holmes, Margaret. “What Have Quakers to Say to the Agnostic?” In Friends Home Service Committee, In Search of God: Some Quaker Essays. London: printed by author, 1966, pp. 14–20.

Ives, Kenneth H. New Friends Speak: How and Why They Join Friends. Studies in Quakerism 6. Chicago: Progresiv Publishr, 1980.

Ives, Kenneth H. Recovering the Human Jesus. Chicago: Progresiv Publishr, 1990.

Johnson, Eric. “Why I Am an Atheist.” Friends Journal 37, no. 1 (January 1991): 17. Also in Quaker Universalist Fellowship. Variations on the Quaker Message. Pamphlet #201. Landenberg, PA: printed by author, 1991.

Johnson, Eric. “Atheism and Friends.” Letter to the editor. Friends Journal 37, no. 5 (May 1991): 6.

Jones, Robinson. “A Great People to be Gathered.” The Universalist 8 (July 1982): 27–34. Reprinted in Patricia A. Williams, ed. Universalism and Religions. Columbia MD: Quaker Universalist Fellowship, 2007, pp. 162–69.

Lacey, Paul. “The Death of ‘the Man Upstairs’: A Critical Appraisal of the New Theology.” Comments by Chris Downing, J. H. McCandless and Clinton L. Reynolds, and response to comments by Paul Lacey. Quaker Religious Thought VIII, no. 1, issue #15 (1966): 3–36.

Linton, John. “Quakerism as Forerunner.” Friends Journal 25, no. 17 (October 15, 1979): 4–9. Reprinted, Pamphlet #1. London: Quaker Universalist Group, 1979. Also reprinted in Quaker Universalist Fellowship. The Quaker Universalist Reader Number 1: A Collection of Essays, Addresses and Lectures. Landenberg, PA: printed by author, 1986, 1–13.

Linton, John. “Nothing Divides Us.” The Universalist 12 (July 1984): 16–20.

Loukes, Harold and H. J. Blackham. Humanists and Quakers: An Exchange of Letters. London: Friends Home Service Committee, 1969.

Macmurray, John. Search for Reality in Religion. Swarthmore Lecture. London: George Allen & Unwin, 1965. Also published in London by Friends Home Service Committee, 1965, 1969 & 1984.

Mayer, Philip. The Mature Spirit: Religion without Supernatural Hopes. Northampton MA: Pittenprauch Press, 1987.

Miles, Thomas R. Towards Universalism. Pamphlet #7. London: Quaker Universalist Group, 1985. Reprinted in 1994.

Miller, Jr., Lawrence McK. “The ‘Honest to God’ Debate and Friends.” Friends Journal 10, no. 6 (March 15, 1964): 124–26.

Morgan, Robert M. “Some Surprises For Us?” Friends Journal 22, no. 19 (November 15, 1976): 582–83.

Morgan, Robert M. and Claire Walker. “Toward New Concepts of God.” Friends Journal 22, no. 19 (November 15, 1976): 582–87. This includes a brief introduction and the articles listed here as Morgan (1976), Walker (1976), and Workshop for Non-Theistic Friends (1976).

Murphy, Carol. “Friends and Unbelievers.” Friends Journal 11, no. 7 (April 1, 1965): 160–61.

Smith, Bradford. “Divine Law.” Friends Journal 10, no. 13 (July 1, 1964): 292.

Smith, Bradford. “The Doubters.” Poem. Friends Journal 11, no. 7 (April 1, 1965): 161.

Swayne, Kingdon W. “Confessions of a Post–Christian Agnostic.” Friends Journal 26, no. 3 (March 15, 1980): 6–9. Also in Quaker Universalist Fellowship. Variations on the Quaker Message. Landenberg, PA: printed by author, 1990, 1–6.

Swayne, Kingdon W. “Humanist Philosophy as a Religious Resource,” in Quaker Universalist Fellowship. Varieties of Religious Experience: An Adventure In Listening. Pamphlet #7. Landenberg PA: printed by author, 1990.

Swayne, Kingdon W. “Universalism and Me—3 Friends Respond.” Universalist Friends 23 (1994): 9–10.

Walker, Claire. “Must We Feel Comfortable?” Friends Journal 9, no. 15 (August 1, 1963): 334.

Walker, Claire. “The Anti-Anthros Speak Out.” Friends Journal 22, no. 19 (November 15, 1976): 583–85.

Williams, Jonathan. My Quaker-Atheist Friend. Poem about Basil Bunting. London: L. and R. Wallrich, 1973.

Recent Publications (1996–2013)

Alpern, Lincoln. “Testimony of a Nontheist Friend,” in Spirit Rising: Young Quaker Voices. Philadelphia, PA: Quaker Press of Friends General Conference, 2010, 219–21.

Alpern, Robin. “Why Not Join the Unitarians?” Universalist Friends, 28 (1997): 23–28. Reprinted in Patricia A. Williams, ed. Universalism and Religions. Columbia MD: Quaker Universalist Fellowship, 2007, pp. 157–62. Also in A Newsletter for Quakers of a Nontheistic Persuasion, Michael Cox, ed., issue 1 (Fall 1996).

Alpern, Robin. “Meeting for Worship: an Opportunity for Being.” Unpublished manuscript, 2006.

Alpern, Robin. “Reflections on a Decade of Nontheism Workshops.” Unpublished manuscript, 2007.

Alpern, Robin. “Atheology”, Spark: New York Yearly Meeting News 40, no. 4 (September 2009). Revised version of Robin Alpern, “What’s a Nice Nontheist Like You Doing Here?” in David Boulton, ed., Godless for God’s Sake: Nontheism in Contemporary Quakerism. Dent, Cumbria, UK: Dales Historical Monographs, 2006, pp. 17–26.

Amoss, Jr., George. “The Making of a Quaker Atheist.” Quaker Theology 1 (1999): 55–62. Also see James and Amoss (2000), below.

Anderson, Paul. “Is ‘Nontheist Quakerism’ a Contradiction of Terms?” In an issue of QRT titled “Quakers and Theism/Nontheism.” Quaker Religious Thought 118 (2012): 5–24.

Arnold, Peter. “Keeping an open mind.” Unpublished manuscript, 2005.

Bates, Paul. “Quaker Diversity.” Talk given at the Frederick Street Meeting, Belfast, Ireland, November 24, 2013.

Boulton, David. A Reasonable Faith: Introducing the Sea of Faith Network. Loughborough, England: Sea of Faith Network, 1996.

Boulton, David. The Faith of a Quaker Humanist. Pamphlet #26. London: Quaker Universalist Group, 1997.

Boulton, David. Gerard Winstanley and the Republic of Heaven. Dent, Cumbria, UK: Dales Historical Monographs, 1999.

Boulton, David. Real Like the Daisies or Real Like I Love You?: Essays in Radical Quakerism. Dent, Cumbria, England: Dales Historical Monographs with Quaker Universalist Group, 2002.

Boulton, David. The Trouble with God: Building the Republic of Heaven, expanded edition. Winchester UK and Washington US: John Hunt Publishing, 2005.

Boulton, David, ed. Godless for God’s Sake: Nontheism in Contemporary Quakerism. Dent, Cumbria, UK: Dales Historical Monographs, 2006 (contributors: Bowen Alpern, Lincoln Alpern, Robin Alpern, David Boulton, Anita Bower, Miriam Branson, Os Cresson, Joanna Dales, David E. Drake, Anne Filiaci, Philip Gross, David B. Lawrence, Joan Lukas, Tim Miles, Gudde (Gudrun) Moller, Hubert J. Morel-Seytoux, Sandy Parker, James T Dooley Riemermann, Elaine Ruscetta, David Rush, Kitty Rush, Jo Schlesinger, Marian Kaplun Shapiro, Wilmer Stratton, Carolyn Nicholson Terrell, Jeanne Warren and Beth Wray).

Boulton, David. “Godless for God’s Sake: Demystifying Mysticism.” The Universalist 77 (June 2006): 14. Reprinted in Patricia A. Williams, ed. Universalism and Religions. Columbia MD: Quaker Universalist Fellowship, 2007, pp. 169–174.

Boulton, David. Who on Earth was Jesus? The Modern Quest for the Jesus of History. Winchester, UK and Blue Ridge Summit, PA: O Books / John Hunt Publishing, 2008.

Boulton, David. “Nontheism Among Friends: Its Emergence and Meaning.” In an issue of QRT titled “Quakers and Theism/Nontheism.” Quaker Religious Thought 118 (2012): 35–44.

Britton, David. “Knowing Experimentally.” Friends Journal 56, no. 10 (October 2010): 5.

Britton, Liberty. “Identity Creation: Nontheist Quaker.” Unpublished manuscript, 2011.

Cadbury, Henry J. “My Personal Religion.” Lecture given at Harvard Divinity School, 1936. Published in Universalist Friends 35 (Fall–Winter 2000): 22–31, with corrections in Universalist Friends 36 (Spring–Summer 2001): 18.

Craigo-Snell, Shannon. “Response to David Boulton and Jeffrey Dudiak.” In an issue of QRT titled “Quakers and Theism/Nontheism.” Quaker Religious Thought 118 (2012): 45–50.

Cresson, Os. “Sharing Meeting.” Friends Journal 47, no. 1 (January 2001): 5.

Cresson, Os. “Quaker in a Material World.” Quaker Theology 5, no. 1 (Spring–Summer 2003): 23–54.

Cresson, Os. “Quakers and the Environment: Three Options.” Unpublished manuscript, 2005.

Cresson, Os. “Quakers from the Viewpoint of a Naturalist.” Friends Journal 52, no. 3 (March 2006): 18–20.

Cresson, Os. “On Quaker Unity.” Friends Journal 55, no. 7 (July 2009): 5.

Cresson, Os. “Doctrinally Open Membership in the Religious Society of Friends.” Unpublished manuscript, 2010.

Cresson, Os. “Listening and Speaking from the Heart.” Friends Journal 59, no. 5 (May 2013): 5.

Drake, David E. “Confessions of a Nontheistic Friend.” Friends Journal 49, no. 6 (June 2003): 18–20.

Dudiak, Jeffrey. “Quakers and Theism/Nontheism: Questions and Prospects.” In an issue of QRT titled “Quakers and Theism/Nontheism.” Quaker Religious Thought 118 (2012): 25–34.

Earp, Charley. “In Search of Religious Radicalism.” Quaker Theology, no. 11 (2005).

Fager, Chuck. Review of Godless for God’s Sake: Nontheism in Contemporary Quakerism, ed. by David Boulton. Quaker Theology 7, no. 2 (winter 2007).

Friends at Twin Cities Friends Meeting. “Statement on Theological Diversity.” Universalist Friends 43 (February 2006): 23. Reprinted as “Theological Diversity Within Twin Cities Meeting” in Patricia A. Williams, ed. Universalism and Religions. Columbia MD: Quaker Universalist Fellowship, 2007, pp. 174–76.

Furry, Susan. “Recognizing That of God in Each Other.” Friends Journal 53, no. 3 (March 2007): 5.

Gjelfriend, George. “Useful Fictions.” Friends Journal 53, no. 8 (August 2007): 19.

Grundy, Martha Paxson. Review of Godless for God’s Sake: Nontheism in Contemporary Quakerism, by 27 Quaker nontheists, ed. by David Boulton. Friends Journal 52 (November 2006): 25–26.

Hoare, Edward. “Time to Speak Out.” The Friend (October 16, 2009).

Holmes, Jesse. “‘Our Christianity’?” Universalist Friends 39 (Fall & Winter, 2003): 15–22.

Hughes, Ian. “Is Quakerism a ‘Religion For Atheists’? Review of Alain De Botton (2012) Religion for Atheists. London: Hamish Hamilton.” Australian Friend 12, no. 6 (June 2012).

Ives, Kenneth H. Some Quaker Perspectives for the Years 2000+. Chicago: Progresiv Publishr, 1996.

James, Edward and George Amoss Jr. “An Exchange: Quaker Theology Without God?” Quaker Theology 2, no. 1 (Spring 2000).

Kuenning, Larry. Review of Speaking of God: Theism, Atheism and the Magnus Image by T. R. Miles. Quaker Religious Thought 29, no. 1 (1998): 42–43.

Lukas, Joan. “What Do I Do in Meeting? The Experience of a Nontheist Quaker.” Unpublished manuscript prepared for forum held at Friends Meeting at Cambridge, May 9, 2004.

Mason, Marcia L. “Journey of a Doubter.” Friends Journal 57, no. 9 (September 2011).

Miles, Thomas R. Speaking of God: Theism, Atheism and the Magnus Image. York, UK: William Sessions, 1998.

Morgan, Arthur. Should Quakers Receive the Good Samaritan Into Their Membership? Landenberg, PA: Quaker Universalist Fellowship, 1998.

Nugent, Patrick J. “Response to Papers on Theism (Just a Little) and Non-Theism (Much More).” In an issue of QRT titled “Quakers and Theism/Nontheism.” Quaker Religious Thought 118 (2012): 51–56.

Reed, Jessica. “Quakerism: Sharing Your Religion.” The Friend (January 20, 2010).

Riemermann, James. “One God at Most, or Two Gods at Least?” Unpublished manuscript, 2006.

Riemermann, James. “What is a Nontheist?” Unpublished manuscript, 2006.

Riemermann, James. Mystery: It’s What We Don’t Know. Quaker Universalist Fellowship Pamphlets, 2008. Also in David Boulton, ed., Godless for God’s Sake: Nontheism in Contemporary Quakerism. Dent, Cumbria, UK: Dales Historical Monographs, 2006, pp. 43–51.

Riemermann, James. “Revealing our True Selves.” Paper presented at conference of Nontheist Friends Network, Birmingham, UK, March 2012.

Rush, David. “They Too Are Quakers: A Survey of 199 Nontheist Friends.” The Woodbrooke Journal 11 (Winter 2002). Reprinted as “They Too Are Quakers: A Survey of 199 Nontheist Friends.” Millsboro, VA: Quaker Universalist Fellowship, 2003.

Seeger, Daniel A. “Why Do the Unbelievers Rage? The New Atheists and the Universality of the Light.” Friends Journal 56, no. 1 (January 2010): 6–11.

Seltman, Muriel. Bread and Roses: Nontheism and the Human Spirit. Kibworth Beauchamp, UK: Matador, 2013.

Smith, Steve. “‘Leadings’ For Nontheistic Friends?” Friends Journal 57, no. 1 (January 2011): 22–25.

Stern, T. Noel. “How I Became a Universalist Quaker.” Universalist Friends 37 (Fall & Winter 2002): 21–31.

Vura-Weis, Brian. “Quakers & Non-Theism.” Western Friend (July/August, 2009).

Wise, Julia. “No Religion. Always Practicing Quakerism.” Friends Journal 58, no. 4 (April 2012): 26.

Wright, Michael. “Disagreeing About God.” The Friend (October 18, 2013).

Yagud, Miriam. “The Wrong Silence.” The Friend 169, no. 5 (February 4, 2011): 14.

Publications by Nontheist Friends Gatherings

Boulton, David, David Rush, and Kitty Rush. “Minute.” Minute approved by the workshop, “Beyond Universalism: The Experience and Understanding of Nontheism in Contemporary Quakerism,” held at Woodbrooke Quaker Study Centre, Birmingham, UK, January 9–11, 2004. http://www.nontheist Also in “Quaker Non-Theism,” by David Boulton, The Friend (February 20, 2004): 15, and described in “News” by David Boulton, David Rush, and Kitty Rush, Friends Journal 50, no. 7 (July 2004): 39.

Conference of the Nontheist Friends Network, 2012. “Minute and Epistle.” Minute approved by “Nontheism Among Friends,” the inaugural conference of the Nontheist Friends Network held at Woodbrooke Quaker Study Centre, Birmingham, UK, March 9–11, 2012.

Conference of the Nontheist Friends Network, 2013. “Minute and Epistle.” Minute approved by “Nontheism Among Friends,” the 2nd annual conference of the Nontheist Friends Network held at Woodbrooke Quaker Study Centre, Birmingham, UK, March 1–3, 2013.

Gathering of Nontheist Friends at Woodbrooke Quaker Study Centre. “Minute and Epistle.” Approved by the workshop, “What Next for Quaker Nontheism,” held at Woodbrooke Quaker Study Centre, Birmingham, UK, February 18–20, 2011.

Workshop for Non-Theistic Friends. “Seekers Beyond Tradition.” Friends Journal 22, no. 19 (November 15, 1976): 586–87. Slightly edited version of unpublished report by participants in the Workshop for Non-Theistic Friends held at the Friends General Conference Gathering, Ithaca NY, June 26–July 3, 1976. Workshop also described in Robert Morgan (1976), above.

Workshop on “Quaker Identity and the Heart of our Faith.” “Minute.” Approved by the workshop, “Quaker Identity and the Heart of our Faith,” held at the Friends General Conference Gathering, Blacksburg VA, June 26–July 4, 2009.

Mediawiki custom temp directory = safe?

Published 18 Jul 2016 by nuclearsugar in Newest questions tagged mediawiki - Webmasters Stack Exchange.

I ran into an issue when upgrading from 1.24.0 to 1.27.0.

Since I'm on a shared server and have multiple MediaWiki installations, the system temp folder is being shared across them all. This is causing an MWException error on page load. Obviously this will cause problems, since temp content from one wiki will overwrite the content of another.

So I've declared a custom temp directory: (Paste into LocalSettings.php)

$wgTmpDirectory = "$IP/images/temp";

While this fixes the issue, I'm not sure it's a secure practice. I see many other people using this fix... But is it safe to have the temp folder PUBLICLY visible?
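One way to sidestep the public-visibility question entirely, assuming the shared host allows directories above the web root (the `private/` path and per-wiki naming below are hypothetical, not MediaWiki defaults), is a fragment along these lines in LocalSettings.php:

```php
// Sketch only: keep each wiki's temp files outside the web root so
// they are never served over HTTP, and give each wiki its own
// directory so installations cannot clobber each other's files.
// $IP and $wgDBname are already defined in LocalSettings.php.
$wgTmpDirectory = dirname( $IP ) . '/private/mw-tmp-' . $wgDBname;

// Create it with owner-only permissions if it does not exist yet.
if ( !is_dir( $wgTmpDirectory ) ) {
	mkdir( $wgTmpDirectory, 0700, true );
}
```

If the host forces everything under the web root, keeping `$IP/images/temp` seems workable provided direct web access to it is blocked, e.g. an Apache `.htaccess` in that folder containing `Require all denied` (or `Deny from all` on Apache 2.2).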

This documentation is all I can find explaining what is dumped into the temp folder. And here is a bug report of the root issue.

A Message for Future Quakers: Work for All Friends, Not Just Your Friends

Published 18 Jul 2016 by Os Cresson in

Our concern today is the future of nontheism among Friends and the future of Friends in general. It is not surprising that nontheist Friends are evaluating our work since 20 years have passed since we began holding regular Friends General Conference (FGC) Gathering workshops. Social movements often need to be reformed after a generation, even when they have been successful.

Nontheist Friends (NTFs) have been asking to be included among Friends. We have been knocking on the meetinghouse door, even nailing statements to it. I would like us to consider an alternative approach. Let’s assume nontheists are included and go on from there. Let’s simply be Quakers doing outreach to nontheists and to theists who are interested in this outreach.

Because time is limited I am only writing about FGC Friends, leaving for other occasions the important discussion of theist and nontheist Friends in other portions of the Religious Society of Friends (RSoFs). Among Friends, the term “nontheist” refers to a person whose beliefs, experiences or approach to life do not include God, or at least not some particular definition of God. This is an umbrella term that can combine with other more specific descriptors. Thus, there are atheist nontheists, agnostic nontheists, even theist nontheists (e.g. rejecting God as a personality, or accepting God as nature and nothing else). Another detail: the numbers at the end of paragraphs are formed from the page in a printed copy, followed by the number of the paragraph on that page. You may refer to the paragraph numbers when commenting on this paper. [1.3]

I would like us to consider three topics. These are: (1) the current condition of theists and nontheists among FGC Friends, (2) what the Religious Society of Friends can be, and (3) how to get there from here.

The current condition of theists and nontheists among FGC Friends.

There is good news and bad news about the current condition of theists and nontheists among FGC Friends. During the last 20 years our condition has improved markedly. There have been workshops at FGC, Pendle Hill and in other venues, our website and email discussion groups, books and articles, and there was a great debate one year at the FGC Gathering. Recently I have begun to see nontheist Friends mentioned when a writer lists varieties of Quakers. We are accepted as one of the streams that make up contemporary Quakerism. Before, we were closeted, and now we are in the open. [1.5]

Theists have found many ways to accommodate nontheists who are practicing as Friends in their meetings and organizations. We are near enough to God, or are on another path up the same mountain, or are doing God’s will even when we don’t invoke God. Or theists may simply notice that Quaker practice can be accompanied by many different faiths. For Quakers, words about our faith have always been suspect. There is a preference for getting on with being faithful, letting our lives speak. These, then, are some of the ways theist Friends have justified the inclusion of nontheists among Friends. [1.6]

This is all good news, and there is much of it, but there is also bad news, and it is news NTFs have not collectively acknowledged and addressed (although, importantly, many Friends have done so in their individual lives). [2.1]

Unfortunately, we haven’t been doing our homework, or not nearly enough of it. We don’t write reviews of important new books and articles, even ones that are about nontheist Quakers. [Endnote 1] We have not published much writing and there have been few posts on our website. We have done little to make a bibliography of the writings of the NTFs who have gone before us. [Endnote 2] [2.2]

There is more bad news: Friends often fall into the habit of keeping peace in our meetings by not talking about our differences. We have imported into our meetings the political correctness we have learned in society at large. This reticence goes beyond nontheism. For instance, too often Christian Quakers feel they cannot speak in as strongly Christian terms as they would like. We think being diverse means not imposing our views on each other. [2.3]

NTFs as a group have not focused on learning how theist and nontheist Friends speak with each other in religiously diverse communities. Nor have we studied how to write for approval by a diverse community (as in meeting brochures and organization mission statements). We haven’t worked on the effect of our presence as nontheists on the membership practices of a meeting. Or the effect on other Quaker practices such as how we come to a sense of the meeting as we conduct business while worshiping. We haven’t studied the implications for the relation of faith and action, and the question of how to take faith-based action in a religiously diverse meeting. [2.4]

We have not organized our responses to questions that always come up when theists consider accepting nontheists, and we have not written a manual for newcomers. [Endnote 3] We haven’t studied child rearing for nontheist parents in a theist society. We haven’t looked at First Day School for children who may be theist or nontheist or undecided, and whose parents may care passionately about this topic. [2.5]

NTFs have not reached out to other religious naturalists and to the wider secular movement (which is very active these days). We have not even done this in Quaker schools. NTFs as a group haven’t reached out to Quaker organizations that may be struggling to serve both theists and nontheists. Individuals have done some of what we have not done as a group, but we haven’t collected the stories of these efforts so as to make them available to others. [2.6]

We have held few meetings for NTFs as a whole other than during the FGC Gatherings and online. At FGC we have generally failed in our effort to involve theist Friends in our events. Year after year the same people lead our FGC workshops and serve on the NTFs Planning Group. The Planning Group does little but plan for the FGC Gathering (mainly the workshop, afternoon program, literature tables, and an evening interest group session). We have tried to have the function of an organization without the structure, but one result has been a lack of leadership. [2.7]

Finally, there are three fundamental paradoxes that our group is not addressing. (1) NTFs call attention to a particular set of religious beliefs (or experiences and approaches) while we assert that religious belief doesn’t matter, at least not in a lot of the ways Quakers have thought it does matter. This seems to be contrary to some Friends’ traditions such as tracing Quaker testimonies to specific religious beliefs. [2.8]

(2) Calling attention to our nontheism risks setting us apart from some theist Friends. This emphasis on a particular set of beliefs stands out: we are the only FGC Affinity Group organized around religious belief. [Endnote 4] The test of how we are doing is not just what we say, but also how other Friends respond. There are many ways to be a nontheist among Friends, many ways to balance the various concerns of everyone involved. [3.1]

(3) Perhaps surprisingly, there is no such thing as nontheist Quakerism. All the questions raised by the presence of NTFs among Friends are simply examples of more general questions about the effects of religious diversity in Friends communities. The issues raised are not specific to nontheism, which is a proper field of study in its own right but one not unique to Quakers (as when we look at nontheism’s implications for ethics, or at the rights of nontheists in traditionally theist societies). Thus, nontheist Quakerism spans two fields: a Quaker one about how to be diverse, and a nontheist one about how to be human as part of a natural world and nothing else. [3.2]

In sum, in my opinion we NTFs are not doing our homework, are not cooperating with theists, and are not addressing contradictions inherent in our approach.

What the Religious Society of Friends can be.

Let us imagine what the RSoFs can be; that is, how we would like to live as Quakers.

I envision a Society in which Friends warmly embrace each other, differences and all. I see each of us speaking in our own religious language, speaking of what is nearest our hearts. This means shifting from a concern about not giving offense to a concern about not taking offense. [Endnote 5] [3.5]

Let us be an example of a religion that accepts people for whom religion is about lives rather than the supernatural. But we do not want a Society limited to these people: let us be an example of love and cooperation between naturalists and supernaturalists. [3.6]

I see Friends uniting in common purposes and practices, while giving each other the latitude to hold our own beliefs about all this. We can all worship together. Questions of membership can be decided through a clearness process that looks at how the community and the applicant are together, how they are practicing as Friends. This shift of focus away from details of belief will allow our meetings to be as diverse as the communities from which we draw our members. [3.7]

Let us make clear that we are committed to all Friends, whatever their accompanying beliefs, experiences and approaches. Until a loving, trusting community has been established, speaking bluntly about nontheism risks misunderstandings and the breakdown of the relationship. [3.8]

Let us be an example of unity amid religious diversity within our meeting communities, and our yearly meetings, and in our relations with Friends of other traditions than our own, and in organizations that draw on a variety of Friends. Let us also be an example of unity in our relations with people of other religions and world views. [3.9]

Let us represent the heart of the early Quaker message, rather than features of the message that resulted from their particular time and place. For example, let us ask each other what canst thou say, without requiring that we say particular things in particular ways. Let us do as the early Friends did when they said they were joining in silence to wait upon the Lord. And then let our lives speak in many voices. [4.0]

How to get there from here.

How can we live as we would like to see Quakers live, while addressing the current condition of NTFs?

This would mean clearly demonstrating NTFs’ commitment to Quakers as a whole and our acceptance of theist Friends. Instead of defending the view that NTs can be Quakers, let’s assume we are Friends and start a program of Quaker outreach to nontheists, and to theists who are interested in nontheists. [4.2]

This could be part of a broader effort to work for Friends’ religious diversity. We can reach out to all Friends, not just to Friends who are like us. As a leader of the women’s movement said, “It is not women’s liberation; it is women’s and men’s liberation.” [Endnote 6] NTFs need to create a movement that all Friends can be part of. [4.3]

Including nontheists among Friends can be important to all Friends if it is done as part of the inclusion of people who hold a variety of personal religious beliefs. It is wonderful to see a Friend support the seeking of another Friend, especially when it is not what the supporting Friend is seeking. It is in my interest to have people not like me in the RSoFs, as long as we are a religiously diverse community. Since people of diverse views can be good neighbors, and can be Friends, working for all of us is enough for me. I am happy to leave questions about what sort of views are better for humankind, for history to decide. [4.4]

Theists will be more likely to support our efforts if we are working for a diverse RSoFs rather than promoting a particular view (especially if it is one they do not share). This approach would make it easier to accommodate the threat NTFs present for some TFs. [Endnote 7] Of course, many NTFs do not see a threat in what we are doing, but that may not be the way our efforts affect others. NTFs can sympathize: imagine how we would feel if an organization named “Theist Quakers” started up next door to us, declaring the beauties of theism. [4.5]

I suggest we continue doing what NTFs have been doing, and we do more, but we do it as representatives of Friends in general. For instance, we can contact secular humanist student clubs in Quaker colleges, but do so as Quakers rather than as nontheist Quakers. Whether the people doing the outreach are theists or nontheists isn’t important; that they represent a RSoFs that is diverse and welcomes both theists and nontheists is important. [4.6]

During recent years we have had trouble getting our Nontheism Among Friends workshop accepted. Let’s propose workshops on topics of interest to Friends in general and let’s have theists as co-leaders. Workshops that meet every morning are the heart of the Gathering experience and it is best that our workshop efforts appeal to all, rather than focusing on just one point of view and one that is easily interpreted as antithetical to the views of other Friends. At FGC Gatherings, let’s do our specifically nontheist work during the non-workshop activities: the afternoon program, staffed Drop In Center, literature tables, evening interest group, and our handout in the packet all Gathering attenders receive. Deeper, on-going personal interactions can be arranged if necessary. All this can be done as Quaker outreach to nontheists rather than as nontheist Friends promoting their views. It will also help to have NTFs as active participants in planning FGC Gatherings, and in the selection of workshops. [4.7]

We can work with other Friends by focusing on Friends practices rather than Friends faith. It is the experience of NTFs that we can practice together as Friends even as we differ in the faiths with which we explain these practices. This contrasts with the Friends tradition of citing faith as the source of our practices, but it is in the Friends tradition of focusing on lives rather than the assertions of faith that accompany these lives. [5.1]

NTFs have learned much about being Quakers in religiously diverse communities. This is something all Friends need, not just NTFs. For instance, it is not necessary to bite our tongues or only speak in a language we have in common. Political correctness has its place in society at large, but we can do better in a loving, trusting community. Instead of trying to be politically correct, we can each listen from the heart and speak from the heart. You speak in your own terms and I listen, open to your meaning without being distracted by your particular words or the theory that accompanies them. I translate into my terms and reply to the heart of what you are saying, speaking in my own terms while you listen with an open heart. For example, you may ask what does God require of us, and I hear you ask what is required of us, or what does the environment require of us, or our highest principles, or neighborly love. [Endnote 8] [5.2]

There are also inclusive ways of writing for possible approval by a religiously diverse meeting. One way is to allow expressions of particular religious views but only where bracketed by statements of our commitment to being inclusive of Friends of all views. [Endnote 9] [5.3]

Membership practices in religiously inclusive meetings can be based on participation rather than beliefs. There is still a clearness process for both the meeting and the applicant, but the focus is on the practice of being Quaker. Beliefs are how we talk about our practices. [5.4]

We noticed several paradoxes in the current efforts of NTFs (that nontheism is wonderful but unimportant, that our efforts tend to set us apart just as we are asking to be included, and that nontheist Quakerism isn’t a useful category in the first place). The solution to these paradoxes is to work on becoming a successfully diverse religious society, and to include outreach to nontheists among other examples of Quaker outreach. Some of the paradoxes will remain, such as calling attention to what separates us while working for unity, but this will be easier to manage if we start with a project all Friends can support. [5.5]

As a practical matter, working for an inclusive RSoFs is a good way to work for NTF goals. This is only true if being inclusive means we each can express our own views, and if nontheism is explicitly included among the diversity of Quaker faiths. [Endnote 10] This approach relies on the happy fact that Quaker practices can be accompanied by a great variety of Quaker faiths. We see this in our own meetings, and we see it as we look across the RSoFs today, and as we look back through Quaker history. [Endnote 11] This approach also relies on the fact that the wider society that surrounds the RSoFs is becoming more nontheist. This will mean that nontheism is gradually viewed more positively and that more nontheists will be interested in the RSoFs. It also means that I can honestly say it is as much in my interest that you are a theist Friend (who is committed to inclusion) as that you are a nontheist Friend. [5.6]

We have been talking about nontheist issues as the goal, with a secondary interest in methods for bringing theist and nontheist Friends together. A more certain path to the goal of including nontheists among Friends, and even to a goal of ultimately having secular views replace supernatural views, is to turn what we have been doing around: let the goal be the practices that support religious diversity among Friends, with nontheist concerns as something that comes along with it. We can be confident of this since the tide is with us. Just as evolutionary biology will replace creationism, naturalism in religion will eventually be established as an alternative to supernaturalism. [6.0]

What are the tasks that NTFs would like to work on collectively in the near future? I suggest there are three main sets of tasks:

(1) Work for an inclusive RSoFs, becoming ever wiser in how to be religiously diverse. This means looking at the implications for every form of Quaker practice including how we speak with each other and how we write for meeting approval. [6.2]

(2) Reach out to newcomers, both within our Quaker communities and outside our Society. This includes people who don’t even know we exist as an alternative, and it includes theists who want to learn about nontheists and their place in the RSoFs. This involves many special forms of outreach such as to children and their parents, and to college students, and to people in the secular movement, and to nontheists in other religions. People doing outreach are not limited to those holding the same views as the people they are reaching out to. [6.3]

(3) Support writing and workshops and conferences about Quakers and nontheism and religious diversity. Describe Quaker practices in nontheist terms. Study the history of nontheist Friends. This writing will be useful to those working on the other two sets of tasks. [6.4]

Let me be explicit about the changes I am asking for in the RSoFs, and the changes I am asking for in what NTFs are doing. I am asking for the RSoFs to be inclusive of diversity in religious faith. Not just reluctantly, and based on keeping quiet about our views, but explicitly and wisely and joyfully inclusive. An inclusion that is obvious to visitors. This may mean changes in membership practices if membership is meant for people who hold particular beliefs. It may mean changes in how we describe ourselves if it is stated or implied that particular beliefs are favored. I am not asking Friends to change their faiths (unless exclusion is part of that faith). [6.5]

Finally, I offer the following eight changes in what NTFs are doing: (1) Cast NTF work in the context of a concern for all Friends. Before going on to questions specifically about nontheism, NT activists must make it clear that we are working for a RSoFs open to all, not just open to people with whom we agree. I am not asking NTFs to hide their views. In a trusting meeting community NTFs will be able to speak openly about their views. However, as we work to create such a community, there will be limits on how we express our views when speaking to people, such as newcomers and children, who don’t know how we listen and speak with each other in a trusting community. [6.6]

(2) Do our homework. Answer the questions people always ask. Find out what has gone on before us. Study how to speak with each other and how to write for meeting approval. Reach out to secular humanists who might be interested in Quakers, and to Quakers who are interested in these questions. Reach out to Quaker organizations, especially those with some particular concern for nontheists (such as organizations involving scientists). Learn to be a religiously diverse community, and share what we have learned. [6.7]

(3) Work on this with theist Friends. This becomes even more important as militant atheists are drawn to the RSoFs and express their views in ways that tend to drive theists away. [Endnote 12] Cooperation will become easier as NTFs make it clear that we are working for all Friends, not just NTF Friends. [6.8]

(4) Find more effective leadership. We do not have an organization, partly because that could increase concern about our motives and emphasize our separateness from other Quakers. Leadership has been provided by the NTFs Planning Group (whose members are self-selected), but it does almost nothing but support work at each year’s FGC Gathering. Most of the people who are active in this group are the same people who have been doing it for over 10 years. Finding new leadership may mean forming an organization. [7.1]

(5) We need to look for other opportunities for NTFs to come together in addition to the FGC Gathering. Friends for LGBTQ Concerns benefited when they started holding a mid-winter conference. Another option is to come to the FGC Gathering a few days early, which QUF used to do for their annual meeting. [7.2]

(6) Change our name. “Nontheism Among Friends,” or “Nontheist Friends,” has been the label of many of our workshops and our affinity group and our afternoon series of events at the FGC Gathering. It is also the name of our website and e-mail discussion groups. Sometimes the word “nontheism” or “nontheist” will help focus the outreach, but only if it is in the context of a broader effort, as in “Quaker Outreach to Nontheists.” A recently suggested alternative is “Friends Open to Nontheism.” [Endnote 13] [7.3]

It can help to consider the situations in which the name for an NTF organization can appear. Here are three examples: (1) In presentations we make in our home meetings. There we have a lot of latitude. (2) As the name of the sponsor of events reaching out to nontheists, and to theists interested in the questions raised by the presence of nontheists among Friends. Here we have not yet established a trusting community committed to each other. (3) As the name on a banner across a lobby where we have a display inviting all Friends to stop in, or as a name on a flyer handed out where there are crowds of Friends inviting them to join our movement. If we want all Friends to join with us in working for a diverse RSoFs that explicitly welcomes nontheists, we need to celebrate all Friends, not just our particular variety of Friends. [7.4]

Thus, there is an ever changing balance of our concern for the expression of our own views and our concern for the effect our words have on our listeners. This can shift dramatically depending on the circumstances. You may be speaking with a Friend with whom you have an open and trusting relationship, and then someone new to Friends joins the conversation. The newcomer may be misled by blunt talk about your views, not realizing that you cherish the presence of other views. [7.5]

Also consider that there are many ways to present your views. For instance, you might explain your position and why it is important, but if your listeners know you well it may be enough to simply offer a few words such as “Of course, some Friends would express themselves differently.” When people speak of humans and animals, I wait for the opportunity to speak of humans and other animals. Once your position is known a slight hint can serve in place of many words; sometimes Friends turn to me expecting a reaction and I just smile. By not objecting I am making a larger point about our commitment to each other. Friends need to learn to adjust our expression of personal views to the requirements of the moment. [7.6]

(7) Another step could be to join a group already working for Quaker religious diversity, such as the Quaker Universalist Fellowship (QUF). We might find a setting in which to do some or all of our work. At least it’s worth considering. As a group QUF is not an advocate for any particular variety of Quakerism but perhaps they could be an umbrella group for Friends working on specific outreach to Buddhists, Moslems, nontheists, scientists, Latin Americans or whomever. [8.0]

(8) I ask Friends to consider engaging in a clearness process regarding our collective activities as NTFs. Do we need an organization? What are the reasons to have one, or not to have one? What would its mission be, and what tasks would it take on? How would it be organized? [8.1]

A clearness committee could do some of its work by telephone or email, but I expect meeting in person would be necessary. It could seek outside advice. It might set up a larger process to seek further clarity on the issues that rise up. This could lead to a report and recommendations for the NTFs Planning Group. [8.2]

In summary, unity among Friends can be based on practice rather than belief or experience. The goals of NTFs can be met even as we each talk about what we are doing in our own characteristic ways. Working for unity amid the diversity of Friends is a good way to work for nontheism among Friends. To be Friends we do not need to deny our different ways of being Quaker, although there are ways to do this that are more or less supportive of each other in special circumstances. There is a lot more NTFs could be doing as an organized group to bring this approach to Friends, but we lack leadership. It would be wise for us to seek ways to study these issues. [8.3]

All this will be a lot of work, but it is what is required of us as we consider the next step in the history of nontheists among Friends. Happily, many of the nuts and bolts of how to be an inclusive religious community are already known to us. [8.4]

Friends have the opportunity to become an example to the world of how a religious community can be diverse and inclusive. However, we must not do it half-heartedly. It will only happen if we do it explicitly, and wisely, and joyfully. [8.5]

I would like to hear your views on how theist and nontheist Friends can move into their future together. You are invited to send me your comments.


  1. For instance, we NTFs in the U.S. have not published reviews of Dan Seeger’s “Why Do the Unbelievers Rage? The New Atheists and the Universality of the Light” (Friends Journal 56, no. 1, January 2010, pp. 6–11), Doug Gwyn’s But Who Do You Say I Am: Quakers and Christ Today (Pendle Hill Pamphlet #426, 2014), Ben Pink Dandelion’s Open for Transformation: Being Quaker (London: Quaker Books, 2014), and the entire issue of Quaker Religious Thought titled “Quakers and Theism/Nontheism” (#118, 2012). All these publications are directly relevant to nontheism in contemporary Quakerism.
  2. As an example of this effort I recently reviewed articles on nontheism in Friends Journal during the 1960s and 2010s. This is posted on A list of all publications that I know of, from 1962 to 2013, by or about NTFs is in my Quaker and Naturalist Too (Iowa City: Morning Walk Press, 2014), pp. 135–145. The list is incomplete and is not being kept up to date. It also is posted at
  3. Questions Quaker nontheists often hear include: What is the difference between nontheism and atheism? If we don’t hold a common faith what will unite us? If we accept people holding any belief does this mean we would accept everyone who applies? How do NTFs interpret basic Quaker terms such as religion, worship, leading and discernment? How do we come to a sense of the meeting when some of us are seeking God’s will and others are just doing whatever they want? Without God, are Quakers simply social activists? Does this deny Quaker history?
  4. See the lists of FGC groups at and
  5. This was offered in 2009 by Callie Marsh of West Branch Monthly Meeting in Iowa.
  6. Ruth Bader Ginsburg quoted in Notorious RBG: The Life and Times of Ruth Bader Ginsburg, by Irin Carmon and Shana Knizhnik, NY: Harper Collins, 2015, p. 72.
  7. This threat can be particularly hard for those who came to Quakers specifically because of their emphasis on a direct experience of God, or for those who have grown up in a Friends community that defined their group in terms of God. They seem to be losing the RSoFs they have loved. It will be easier for some of these Friends if NTFs are obviously working for all Friends.
  8. For more see my “Listening and Speaking from the Heart” (Friends Journal 59, no. 5, May 2013, p. 5), and an expanded version with an anthology in my Quaker and Naturalist Too, pp. 12–25.
  9. A useful example is the Statement on Unity with Diversity by the Spiritual Nurturance Committee of Quaker Earthcare Witness (online at QEW has found that this general approach to being diverse is useful in relations of Quakers who are theists and nontheists, Christians and non-Christians, and new age eco-spiritualists and environmental scientists.
  10. This is true for a wide variety of NTF goals. For instance, it is even a good approach if one’s goal is a world free of any involvement with the supernatural.
  11. This is an example of the larger truth that people lead good lives while holding different faiths. Conversely, one’s faith does not guarantee good behavior and changing a person’s faith is not an effective way to change the rest of their behavior. This is also why many people emphasize lives instead of talk. Jesus understood that the best way to love God is by loving our neighbors. It is said that when an elderly Quaker was asked what he believed he replied, “Ask my neighbor.”
  12. At the 2016 FGC Gathering, Friends from Arizona reported several instances of theists leaving their “Experiment with Light” groups because of how they were treated by militant atheists.
  13. This felicitous phrase was offered by Betsy Baertlein of Iowa City Monthly Meeting.

Block Storage: More Space to Scale

Published 12 Jul 2016 by DigitalOcean in DigitalOcean Blog.

At DigitalOcean, our vision has always been to build a platform that allows developers to run their infrastructure at scale without getting in their way. To date, the top feature request from our community has been to have the ability to add additional disk space to their Droplets. Today, we are excited to introduce Block Storage to make that possible.

Over the past few months, our product and engineering teams have been working to deliver a storage service that is as simple and intuitive as our compute, the Droplet. With the help of more than 15,000 beta users, we have designed an experience that is focused on reducing friction and allowing you to scale with ease.

With Block Storage, you can now scale your storage independently of your compute and have more control over how you grow your infrastructure, enabling you to build and scale larger applications more easily. Like the Droplet, Block Storage is SSD-based and has an easy-to-use API. Our pricing model is straightforward, based only on capacity: $0.10/GB per month. There are no complicated formulas necessary to determine your overall cost.
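Since the pricing is capacity-only, a volume's bill is just size times rate. A tiny sketch of the arithmetic (this is an illustration, not an official DigitalOcean billing tool; it computes in integer cents to avoid floating-point rounding):

```python
# Block Storage pricing sketch: $0.10 per GB per month, capacity only.
PRICE_CENTS_PER_GB_MONTH = 10  # $0.10/GB per month

def monthly_cost_cents(size_gb: int) -> int:
    """Return the monthly cost, in cents, for a volume of size_gb gigabytes."""
    return size_gb * PRICE_CENTS_PER_GB_MONTH

# A 100 GB volume costs $10.00/month; treating 16 TB as 16,000 GB,
# a maximum-size volume costs $1,600.00/month.
print(monthly_cost_cents(100) / 100)    # dollars
print(monthly_cost_cents(16000) / 100)  # dollars
```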

Let's get to some details:

Highly Available and Redundant

Block Storage stores data on hardware that is separated from the Droplet and replicated multiple times across different racks, reducing the chances of data loss in the event of hardware failure.

Scalable and Flexible

You can easily scale up and resize your Storage volumes from 1GB to 16TB and move them between Droplets via the control panel or API. As your storage needs grow, you can expand an existing volume or add more volumes to your Droplet.

Reliable and Secure

All the data is encrypted at rest and transmitted to the Droplets over isolated networks.

Multiple Regions

You can create Block Storage volumes right now in NYC1 and our new SFO2 region. FRA1 is next in line and will be available in the coming weeks. We're working quickly to expand to other regions. More updates to come.

Update: As of Monday, August 1st Block Storage is now live in FRA1! Stay tuned for more updates as it rolls out across our regions.

Getting Started

When you log in to your dashboard, you will see a new Volumes tab that has an overview of your volumes:

Volumes overview

You will also be able to add volumes right from a Droplet's page:

Volumes tab on Droplet page

Once you have a volume attached to your Droplet, use the simple copy and paste instructions displayed on your dashboard to configure it. For more information on working with your Block Storage volumes, read our community tutorials about Linux filesystems and tools and our introduction to Block Storage.

Like all DigitalOcean resources, you can also automate provisioning using our brand new volumes API or doctl, the official DigitalOcean command-line client.
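As a sketch of what provisioning through the API looks like: creating a volume is a single authenticated POST. The endpoint and field names below follow DigitalOcean's public v2 API documentation, but treat this as an illustration rather than an official client; the token comes from a hypothetical `DO_TOKEN` environment variable.

```python
import json
import os
import urllib.request

API_URL = "https://api.digitalocean.com/v2/volumes"

def volume_request(name: str, region: str, size_gb: int) -> dict:
    """Build the JSON body for a Block Storage volume-create call."""
    return {"name": name, "region": region, "size_gigabytes": size_gb}

def create_volume(token: str, name: str, region: str, size_gb: int):
    """POST the volume-create request (requires a valid API token)."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(volume_request(name, region, size_gb)).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",
        },
    )
    return json.load(urllib.request.urlopen(req))

if __name__ == "__main__":
    # The network call needs a real token, so here we only show the
    # request body a 100 GB NYC1 volume would send.
    token = os.environ.get("DO_TOKEN", "")
    print(json.dumps(volume_request("example-volume", "nyc1", 100)))
```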

Thank You

The whole team at DigitalOcean would like to thank all the beta testers who helped shape Block Storage and everyone who continues to provide feedback and offer suggestions. There is so much more we are excited to share with you in the future as we continue to strive to simplify infrastructure.

Mediawiki : Issues when installing Visual Editor Extension with Parsoid

Published 12 Jul 2016 by Samy-DT in Newest questions tagged mediawiki - Webmasters Stack Exchange.

I'm working on Ubuntu 16.04 with mediawiki 1.27.0. I'm trying to install Parsoid and Visual Editor. I followed the documentation to the letter. However, after finishing the installation, I have the first error message listed in the troubleshooting part of the documentation:

Error loading data from server: HTTP 500. Would you like to retry?

I tried all that is proposed to check the error.

  1. curl is installed and works, because the command below returns a response.

    curl -I -L http://wikirct/api.php

  2. I do not have any rewrite in my Apache configuration.

  3. I think, but maybe I'm wrong, that my /etc/mediawiki/parsoid/settings.js is set up correctly (see below).

parsoidConfig.setMwApi({ prefix: 'wiki', uri: 'http://wikirct/api.php', domain: 'wikirct' });

Any help would be appreciated.
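For reference, here is the same settings.js entry with the checks that usually matter for this HTTP 500 spelled out as comments. This is a sketch based on the configuration shown above, not official Parsoid documentation; the hostname and prefix are the question's own values:

```javascript
// /etc/mediawiki/parsoid/settings.js -- annotated sketch of the entry above.
parsoidConfig.setMwApi({
    prefix: 'wiki',

    // 'uri' must be reachable from the machine running Parsoid itself.
    // A short internal hostname like 'wikirct' can resolve in your
    // browser but not for the Parsoid service; repeat the curl check
    // from the Parsoid host to rule this out.
    uri: 'http://wikirct/api.php',

    // 'domain' must match the domain VisualEditor is configured to send
    // via $wgVirtualRestConfig in LocalSettings.php; a mismatch is one
    // common cause of errors like this.
    domain: 'wikirct'
});
```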

Announcing SFO2: Hello Again San Francisco!

Published 11 Jul 2016 by DigitalOcean in DigitalOcean Blog.

Great news: SFO2 is now live! It is our second West Coast datacenter, joining SFO1, one of our most popular. By adding a second datacenter to the region, we are enabling developers to build out more resilient infrastructure. Creating redundancy by scaling across multiple datacenters in the same region enables more robust applications.

When architecting for high availability, replicating your data and being able to fail over from one datacenter to another can keep application downtime to a minimum even in the face of an unlikely outage. SFO2 opens up the possibility for our users in the region to design more fault-tolerant infrastructure.

SFO2 comes with our latest hypervisor design, and the datacenter is equipped with 40GbE networking. Built with Block Storage in mind, the feature will be available here on day one of general availability.

Offering both new features and greater reliability, we think this is one of our most exciting datacenter launches yet.

Deploy your first Droplet in SFO2 today!

MAME 0.175 Arrives and the Emularity Follows

Published 30 Jun 2016 by Jason Scott in ASCII by Jason Scott.

Just a quick update on the whole “JSMESS is now just something MAME does and so we should theoretically be able to update the Internet Archive’s emulation quickly” front.

It works.

MAME, that lovely thing, went from version 0.174 to 0.175 yesterday. It was pointed out to me pretty soon after the program dropped. Taking notes for later instructions, I began a process of compiling the new versions of the in-browser emulators for the Internet Archive. I used a “weighted library” approach, where the emulator with the most individual items in it (that would be the ZX Spectrum, at a svelte 20,000 items) gets compiled first, and then the next largest set of emulated items, and so on. There are roughly 700 emulator drivers on the Emularity at the Archive, but only roughly 30 have more than 1 item.
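The "weighted library" ordering described above is, in essence, a sort of the emulator drivers by how many archive items depend on each one, largest first. A toy sketch (the driver names and all counts except the ZX Spectrum's are made up for illustration):

```python
# Compile emulator drivers with the most archive items first, so the
# bulk of the collection is upgraded as early as possible.

def weighted_order(item_counts: dict[str, int]) -> list[str]:
    """Return driver names sorted by item count, largest library first."""
    return sorted(item_counts, key=item_counts.get, reverse=True)

# Hypothetical counts; only the ZX Spectrum figure comes from the post.
counts = {"zx_spectrum": 20000, "apple2": 5000, "odyssey2": 300, "vectrex": 80}
print(weighted_order(counts))  # ['zx_spectrum', 'apple2', 'odyssey2', 'vectrex']
```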

So, what this means is that within about 90 minutes of being informed about MAME 0.175, the Emularity on the Internet Archive was upgraded to match for 99 percent of all the items. 

The other hundreds of emulator drivers look to take about 12 hours in total to compile and install, but it’s pretty much 100% automatic, so it’s not taking a lot of time.

So consider that process doing extremely well.

But more than that, it’s definitely going to be a case of pushing for more and more people to contribute to the MAME project, because as proven here, if someone makes wholesale improvements to emulation support for something, the Archive will reflect it within 30 days. If the word comes down that the new thing is so amazing it can’t wait, it’s possible to declare that driver to be done immediately and updated to an in-beta version. Really, this is when I hope the advancements come. I hope it’s just a case that really talented programmers looking to make the world better just don’t know how easy it is to contribute to MAME now, and that it’s on GitHub and openly licensed, and once they find out, amazing optimizations and expanded work will make themselves known.

We can hope! But until then… upgrading is definitely a snap now. Enjoy the new versions of everything.

Update on the NRA Trademark Complaint

Published 29 Jun 2016 by DigitalOcean in DigitalOcean Blog.

Yesterday, network access was taken down to a website which was hosted on our service by a customer, an intermediate platform provider. We want to let you know what happened.

On June 23rd, we received notice from the NRA's counsel that a site we hosted was infringing the NRA's trademarks. The NRA demanded that we remedy the problem by removing the material. In response, we followed our standard procedures for trademark infringement notices and informed the customer immediately of the NRA's demands. We provided more than 5 days for the customer to respond to or resolve the issue. We also spoke to the customer on several occasions to inform them that we intended to take action on the trademark claim. They chose not to remove the violating material. Accordingly, our Trust & Safety team restricted network access to their Droplet, which caused an outage to all of their users' websites hosted on that Droplet. Less than 2 hours into the outage, the customer addressed the trademark notice, and network access was immediately restored.

DigitalOcean followed procedures that help protect us from having to resolve what can be complicated disputes between third party rights holders and our customers regarding IP issues. In this case we should have given greater care to the customer's voice and their right to engage in parody. In retrospect, we believe that the website identified in the NRA's takedown notice was not a trademark infringement but was instead protected by the First Amendment. We at DigitalOcean champion freedom of speech and the free and open web.

Going forward, we will be working closely with our legal counsel to review our Trust & Safety procedures so we can make better decisions. We are committed to providing our customers with the best level of service and supporting their rights and freedoms. That is our responsibility as an infrastructure service provider and one that I take very seriously.


Ben Uretsky
Co-Founder & CEO

A Showcase for a Terrible Monitor

Published 28 Jun 2016 by Jason Scott in ASCII by Jason Scott.


Some time ago, I wrote about the amazing efforts to bring a realm of accuracy to emulation, especially arcade machine emulation. The work was (and is) amazing, but one problematic aspect was that the main components for this work in MAME were for the Windows version only, with plans to make it more cross-platform down the line.

Well, it is now down the line: the work has been done, and we’ve improved the turnaround on the compile time from “new version of MAME released” to “the new version of MAME is running at the Internet Archive,” which means we can finally put up a nice example of this wrinkle in emulating the vintage experience.

So, by visiting this item on the Archive, you can boot up an arcade machine that is blurry, has ghosting, is slightly curved, and has bleed through some of the pixels and an obvious set of scanlines and reflection….


Seriously, though, this is incredibly important news. It’s a living demonstration of the hurdles and considerations of ’emulating’ older technological experiences. It’s not all ultra high-definition happiness and razor-sharp graphics over surround sound blasted through bluetooth speaker bars. It’s about capturing components of these games that are coming from a different realm than just what the code had to say about it. Between 50 and 80 milliseconds after an emulation is working, people come out to say “it’s not the same, it doesn’t have _____” where _____ is an ever-more-picky set of attributes that makes the experience of the game “real” to them and which they think ruins the entire emulation if the attribute is not right there.

Bear in mind that the potential additional settings for these monitors being emulated are many times more complicated than in this demo, and that the higher the resolution, the better – because now you’re not just emulating the pixels themselves, but the actions and effects around those pixels.

Welcome to the cutting edge of the cutting edge of the blurry edge.


The game I chose for this particular demo is its own nutty story: Crazy Kong. As recounted in The Secret History of Donkey Kong article, and not in Wikipedia any more, Donkey Kong was actually programmed by an outside party for Nintendo. (It was designed by Nintendo, for sure.) This same outside company went on to do other games you might know, like Zaxxon and Congo Bongo. Part of this is that Crazy Kong is not a bootleg of Donkey Kong but a legit license.

It’s also terrible, and wasn’t supposed to be in the US, but then with the skyrocketing success of Donkey Kong, it ended up here in bootleg form.

So, for me personally, Crazy Kong brings back memories of being one of those games shoved into bowling alleys, pizza places, and in shifty locations where the purchase of drugs heavily overrode the quality of the arcade game experience. It seems only right, then, that the slightly crappy monitor panorama be bestowed upon this emulated game, brought up from the depths. I know that’s how I experienced it so many years ago, and you can experience it now.

Some notes: The resolution on this web-based emulation is much higher than the usual games in the Internet Arcade, mostly to really bring out the qualities of the monitor properties. This might slow on some machines, or annoy with some different setups out there. But setting the whole thing to fullscreen will probably make it better for you. As is still the case, Firefox tends to run it better than Chrome and both better than Microsoft Edge (for now) and Safari. You can also always go back to the non-CRT emulation on the site to compare.

A huge amount of applause to the MAME development team for this years-long effort to emulate this important aspect of the vintage experience, and a thanks to everyone who helped make it run at the Internet Archive.

Now get blurry.

Bad formatting of search result with MediaWiki 1.26.3

Published 27 Jun 2016 by stenci in Newest questions tagged mediawiki - Webmasters Stack Exchange.

I just upgraded my company's internal wiki and now the search results appear badly formatted.

The image below shows the H2 containing the text Page text matches on the right of the page, below the Page title matches. When the page shrinks or expands the search results move to the left below the title matches (as in the image) or to the right below the Page text matches text. The Page text matches text always stays up there and becomes invisible if the page shrinks too much.

Is it a problem with MediaWiki 1.26.3?

Did I mess something up during the upgrade?

How do I fix it?

enter image description here

MediaWiki  1.26.3
PHP        5.5.35
MySQL      5.5.48-37.8
ICU        4.2.1

The Fundamental Kickstarter Film Incompatibility

Published 26 Jun 2016 by Jason Scott in ASCII by Jason Scott.

(This is being crossposted between my weblog and my kickstarter campaign for my three documentaries currently in production.)

So, Kickstarters are now simply “part of the landscape” of filmmaking, just like it became part of the landscape of an awful lot of things out there which were previously cases of passing the hat, sinking personal cost, or otherwise having to squeeze blood out of the social network’s stone. I’ve heard countless rough plans that get a Kickstarter thrown into the mix like some sort of financial MSG that will paper over the small cracks here and there and get the intended show (or product, or event) on the road.

So, in the years hence, I’ve seen Kickstarter used for dozens of films, including a good bushel of ones that I’ve backed in some small or large way. And I have something entirely unhelpful to report:

Film Kickstarters almost always end in heartbreak.

Now, let me be clear, I don’t mean they don’t get finished. They most certainly do, in the vast majority of cases. Before I switched over almost exclusively to the “digital download” option for kickstarters, I built up a pretty tidy set of Blu-Ray and DVD sets with the names of the documentaries I backed (I almost always back documentaries exclusively), and those things are done, done, done. And well made! Enjoyable.

But what almost always seems to happen is that down in the clutch, at that point where the films are somewhere in the twilight zone between final mixdown and the copies (digital or physical) fly out into the world, there’s a rapid breakdown of communication and happiness between the backers and the creators. Almost every time.

I don’t think I can solve this problem, per se, but I can mention it and mention what I’m doing, which is likely not going to work for anybody else in this situation.

Pulling my long-dormant mass communications degree from decades-old muck, I’ll say that films in the digital era are subject to a few properties that make them very different than, say, music albums or software programs. This especially comes into play with the concept of “release”.

It’s a given that in the digital world we live in, a thing that’s a bitstream that is somewhere in the Internet is officially all over the Internet. This is both delightful (the file can go everywhere) and to some, terrifying (the file can go everywhere). This property is out there and it is permanent – no amount of coming up with idiotic gatekeeping streams or anti-copying measures are going to stop a file in the wild from being a file in the wild everywhere. (Unless it’s boring or broken.)

With music albums, you can release what counts for “singles” now – single .mp3 files of one song on the album, maybe the one you want heavily rotated or available. You don’t have the full album out there, and you get to still choose when the whole thing goes online. (A couple album kickstarters I’ve backed have released singles before release, for example.) And with software, there’s always “demos” that you can put out, which let you play the first level or some aspect of the program without it all being out there. (Some entities can be lazy and just “tie off” the content, which means it’s trivial to unlock and get the full version, but that’s the lazy group’s fault, not the fault of the nature of what’s being done.)

But with films, you kind of have to do an all-or-nothing deal. You throw the movie out into the world, or you don’t. You can argue about the bonus features and the packaging, but the central X minutes of film are not something easily put out as a “single” or in a “demo mode”.

Oh, sure, you can have trailers, and selected scenes released, but that’s not the same as releasing the whole movie, at least to many backers. It’s out or it’s not.

Therefore, in that moment when the film is nearly done, and the backers who have so generously given money to see the film hit that point are waiting, the filmmakers find themselves seeking some level of professional distribution. And if you want old-school “waiting for this internet to go away”, you definitely are going to find a lot of that in professional distribution.

So right then, in that critical point which should be a celebration, is when there is awful heartbreak. All true examples:

And so on, through many iterations and variations.

The thing is, I think the patient may be terminal – I think in that period between “oh man, we have a movie” and the movie hits hands, there’s so much going on in the way of ensuring the content is paid for, not duplicated, not out of the control of the people who want to get recompense for the finished effort. But at the same time, the number of folks who are expecting it at the first few seconds of availability can be significant and large.

I’ve seriously watched this so many times, it’s almost become an expected milestone for me when these projects wind down into “finished”. But for the backers who are only backing that particular film, it can seem a horrible shock that the film got shown at Maybe-Get-Your-Film-Sold Fest instead of online-debuted to the backers only. Or the aforementioned physical-comes-after-online orders. Or any of the other pitfalls.

There’s several solutions. They’re all pretty crazy. I’m trying one myself.

As each of the documentaries I’m working on is finished, I’m releasing them online pretty much as fast as possible. I’ll make sure the backers have access to everything. I’m not going to play games with holding stuff back.

The physical, deluxe editions will have components of the physical products that will make them interesting and enjoyable on their own, but not controlled by being able to see or not see the movies and the content. I am working on them as separate, involved endeavors.

But I’m nuts. I don’t like the whole “sign your work away to a distributor” thing, and my particular project is so over-time that I feel very beholden to getting it into hands the second it’s out there. It’s also my 4th (through 6th) rodeo; I’m happy to change things up.

But my contention stands: films are difficult things to get through a Kickstarter without broken hearts. I don’t know how to walk it back, and I don’t know what people can do, other than do a lot of educating at the start of a campaign so backers (and creators) are not heartbroken at the end.


Atari and Arcade Kickstarters To Back

Published 25 Jun 2016 by Jason Scott in ASCII by Jason Scott.

I’m going to suggest two kickstarters you might consider backing.

The first is a consumer hardware thing: The folks at Dream Arcades, who I interviewed for my own documentary, have a new easy-to-use emulation station that they’re making available. As of this writing, the Kickstarter is at about 25%. It’s not for everyone – not everyone wants to spend a few hundred bucks on a professional-grade setup for playing old games. But if you think that it might be nice to have something that “just works”, then I can tell you I’ve toured this business, inspected the work they do, and interviewed the owner and employees about their outlook and approach to making something that sits in the home and office and works nicely. They make a nice thing, and this set of “Retro Consoles” is more of that. So back it if you’d not heard of it and decide you might want one, because they’re offering a nice discount via the Kickstarter.

(There’s a set of people who responded to this kickstarter by saying “I could do this so much cheaper using a [roll of toilet paper and a ham radio and a hacked Parker Brothers Merlin].” and yes, you probably could. You’re also the kind of person who does the oil change yourself and wouldn’t call Geek Squad if you were trapped under a boulder. I get that. It’s not something you want. But it’s a nicely made thing if you do.)

Nolan Bushnell
The second kickstarter warms my heart because it’s for episode 2 of a documentary that I was pleased even saw the light of day, much less start to achieve the road to being a mini-series: 8-Bit Generation Episode 2: Easy to Learn, Hard to Master.

With dozens of interviews conducted, many in-depth, I knew just from talking to the filmmakers over the past couple of years that they were hoping to have made the whole thing a mini-series, and now they were struggling to make just one episode. They decided to do just that episode on Commodore, and the resulting work definitely came out, and I saw it, and have a copy. It happened!

So the fact they’re moving on with an Episode 2 means that they are still trying to achieve the dream of a full miniseries, which is fantastic, because they have so much good material in it.

As of this writing, it’s at 50%, and that’s slightly troubling, because you’d think this would be a slam dunk. But there we are, and so if people want to see some truly unique historical interviews see the light of day as well-produced episodes, now’s your chance.

Anyway, there you go. I mention stuff like this on my twitter account, but it’s quite obvious that between non-linear timelines, spam, and who knows what else, something a person says on Twitter is no longer really guaranteed to reach an audience, so we’re back to weblog-land. And that reminds me: More entries to come!


Published 24 Jun 2016 by addshore in Addshore.

So, the biggest turnout at a UK-wide referendum, at 72.2%, though we have only had three.

It was so close. 27.8% didn’t vote, and thus 34.7% of the UK electorate voted to remain and 37.5% voted to leave. The pie chart really emphasises this.

brexit pie

As for comparing the two referendums, the vote to join the EEC in 1975 saw 17.3 million vote to join with only 8.4 million against.
With Brexit, a similar number (in the scheme of things) wanted to remain, with 16.1 million, but a whopping 17.4 million wished to leave.
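These electorate shares follow directly from the raw vote counts and the 72.2% turnout figure; a quick recomputation (in Python, purely as an illustration):

```python
# Sketch: recomputing the quoted electorate shares from the raw counts
# (16.1 million remain, 17.4 million leave) and the 72.2% turnout.
remain, leave = 16.1e6, 17.4e6
turnout = 0.722
electorate = (remain + leave) / turnout  # roughly 46.4 million eligible voters
print(round(100 * remain / electorate, 1))  # 34.7 (% of electorate voting remain)
print(round(100 * leave / electorate, 1))   # 37.5 (% of electorate voting leave)
```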

Interestingly, both the 1975 referendum on joining the European Communities and the Brexit referendum had a higher turnout than any European Parliament election; the highest turnout for those elections was in 1994, with 49.4%.

Let’s see what happens over the next 5 years!

Internal Server Error; enable debugging provides no additional information

Published 19 Jun 2016 by jww in Newest questions tagged mediawiki - Webmasters Stack Exchange.

I'm working with MediaWiki 1.26.4 on CentOS 7.2 running Apache 2.4.6 (originally MediaWiki 1.25.1 with Apache 2.4.3). I am experiencing an Internal Server Error after editing a page and submitting it:


There is no additional information in /var/log/httpd/error_log. According to Manual:How to debug, I added the following to LocalSettings, and then restarted Apache:

$ sudo tail -7 LocalSettings.php
# Added by JW for debugging (JUN 2016). Keep commented unless needed.
error_reporting( -1 );
ini_set( 'display_errors', 1 );


Adding $wgShowExceptionDetails, error_reporting and friends does not provide any additional information that we can find. error_log is clean, access_log is clean. We enabled error_log = php_errors.log, but the server does not produce a php_errors.log. We have no idea where else to look.
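For reference, the fuller debug stanza that Manual:How to debug describes would look roughly like the following sketch (the log path is an assumption; any directory the web server can write to works):

```php
// Sketch of a LocalSettings.php debug stanza per Manual:How to debug.
// The log path below is an assumption; pick any directory Apache can write to.
error_reporting( -1 );
ini_set( 'display_errors', 1 );
$wgShowExceptionDetails = true;
$wgShowDBErrorBacktrace = true;
$wgDebugLogFile = '/var/log/mediawiki/debug.log';
```

With $wgDebugLogFile set, MediaWiki writes its own verbose log independently of Apache's error_log, which is often where errors that never reach httpd's logs show up.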

We ran mysqlcheck my_wiki --auto-repair --user=... --password=... and it reports no errors. apachectl configtest immediately returns Syntax OK. However, the problem persists.

We also performed an upgrade to MediaWiki 1.26.4 (from 1.25.1) in hopes of fixing this internal server error with no joy. It still persists, and I still cannot get additional information about it.

QUESTIONS: How does one really enable debugging information? How can we gather more information about the problem plaguing this server?

(Please provide actionable items in response to the question; and please don't provide off-site links telling us to try some of the stuff on some other page that may or may not work. And please provide answers for the specific question that was asked, and not other questions that were not asked.)

Here's the most recent "useless error message" from error_log after upgrading to MediaWiki 1.26. It's from the tail of /var/log/httpd/error_log when a request was submitted to update a wiki page.

[Fri Oct 28 21:05:04.456126 2016] [suexec:notice] [pid 1053] AH01232: suEXEC mechanism enabled (wrapper: /usr/sbin/suexec)
[Fri Oct 28 21:05:04.457260 2016] [:notice] [pid 1053] ModSecurity for Apache/2.7.3 (http://www.modsecurity.org/) configured.
[Fri Oct 28 21:05:04.457283 2016] [:notice] [pid 1053] ModSecurity: APR compiled version="1.4.8"; loaded version="1.4.8"
[Fri Oct 28 21:05:04.457290 2016] [:notice] [pid 1053] ModSecurity: PCRE compiled version="8.32 "; loaded version="8.32 2012-11-30"
[Fri Oct 28 21:05:04.457295 2016] [:notice] [pid 1053] ModSecurity: LUA compiled version="Lua 5.1"
[Fri Oct 28 21:05:04.457299 2016] [:notice] [pid 1053] ModSecurity: LIBXML compiled version="2.9.1"
[Fri Oct 28 21:05:04.495066 2016] [auth_digest:notice] [pid 1053] AH01757: generating secret for digest authentication ...
[Fri Oct 28 21:05:04.632965 2016] [mpm_prefork:notice] [pid 1053] AH00163: Apache/2.4.6 (CentOS) OpenSSL/1.0.1e-fips configured -- resuming normal operations
[Fri Oct 28 21:05:04.632994 2016] [core:notice] [pid 1053] AH00094: Command line: '/usr/sbin/httpd -D FOREGROUND'

A couple of folks have commented about the startup messages. I did not realize they were abnormal since Apache always produced them. We recently made the following change to address it (but it did not help our "no information" problem):

# diff /etc/httpd/conf.d/mpm_prefork.conf.bu /etc/httpd/conf.d/mpm_prefork.conf
< StartServers       1
< MinSpareServers    1
< MaxSpareServers    5
< ServerLimit       10
< MaxClients        10
> StartServers       4
> MinSpareServers    4
> MaxSpareServers    8
> ServerLimit       32
> MaxClients        32

None of us are professional admins, so we don't know if it's enough to resolve the issue. We are a bunch of free and open software developers who run a web server and wiki to help users.

Here is the "maximum brevity" /etc/php.ini:

# cat /etc/php.ini | egrep -v '(^;|^\[)' | sed '/^$/d'
engine = On
short_open_tag = Off
asp_tags = Off
precision = 14
output_buffering = 4096
zlib.output_compression = Off
implicit_flush = Off
unserialize_callback_func =
serialize_precision = 17
disable_functions =
disable_classes =
zend.enable_gc = On
expose_php = On
max_execution_time = 30
max_input_time = 60
memory_limit = 128M
error_reporting = E_ALL & ~E_DEPRECATED & ~E_STRICT
display_errors = Off
display_startup_errors = Off
log_errors = On
log_errors_max_len = 1024
ignore_repeated_errors = Off
ignore_repeated_source = Off
report_memleaks = On
track_errors = Off
html_errors = On
variables_order = "GPCS"
request_order = "GP"
register_argc_argv = Off
auto_globals_jit = On
post_max_size = 8M
auto_prepend_file =
auto_append_file =
default_mimetype = "text/html"
doc_root =
user_dir =
enable_dl = Off
file_uploads = On
upload_max_filesize = 2M
max_file_uploads = 20
allow_url_fopen = On
allow_url_include = Off
default_socket_timeout = 60
cli_server.color = On
date.timezone = "UTC"
pdo_mysql.cache_size = 2000
SMTP = localhost
smtp_port = 25
sendmail_path = /usr/sbin/sendmail -t -i
mail.add_x_header = On
sql.safe_mode = Off
odbc.allow_persistent = On
odbc.check_persistent = On
odbc.max_persistent = -1
odbc.max_links = -1
odbc.defaultlrl = 4096
odbc.defaultbinmode = 1
ibase.allow_persistent = 1
ibase.max_persistent = -1
ibase.max_links = -1
ibase.timestampformat = "%Y-%m-%d %H:%M:%S"
ibase.dateformat = "%Y-%m-%d"
ibase.timeformat = "%H:%M:%S"
mysql.allow_local_infile = On
mysql.allow_persistent = On
mysql.cache_size = 2000
mysql.max_persistent = -1
mysql.max_links = -1
mysql.default_port =
mysql.default_socket =
mysql.default_host =
mysql.default_user =
mysql.default_password =
mysql.connect_timeout = 60
mysql.trace_mode = Off
mysqli.max_persistent = -1
mysqli.allow_persistent = On
mysqli.max_links = -1
mysqli.cache_size = 2000
mysqli.default_port = 3306
mysqli.default_socket =
mysqli.default_host =
mysqli.default_user =
mysqli.default_pw =
mysqli.reconnect = Off
mysqlnd.collect_statistics = On
mysqlnd.collect_memory_statistics = Off
pgsql.allow_persistent = On
pgsql.auto_reset_persistent = Off
pgsql.max_persistent = -1
pgsql.max_links = -1
pgsql.ignore_notice = 0
pgsql.log_notice = 0
sybct.allow_persistent = On
sybct.max_persistent = -1
sybct.max_links = -1
sybct.min_server_severity = 10
sybct.min_client_severity = 10
bcmath.scale = 0
session.save_handler = files
session.use_cookies = 1
session.use_only_cookies = 1
session.name = PHPSESSID
session.auto_start = 0
session.cookie_lifetime = 0
session.cookie_path = /
session.cookie_domain =
session.cookie_httponly =
session.serialize_handler = php
session.gc_probability = 1
session.gc_divisor = 1000
session.gc_maxlifetime = 1440
session.bug_compat_42 = Off
session.bug_compat_warn = Off
session.referer_check =
session.cache_limiter = nocache
session.cache_expire = 180
session.use_trans_sid = 0
session.hash_function = 0
session.hash_bits_per_character = 5
url_rewriter.tags = "a=href,area=href,frame=src,input=src,form=fakeentry"
mssql.allow_persistent = On
mssql.max_persistent = -1
mssql.max_links = -1
mssql.min_error_severity = 10
mssql.min_message_severity = 10
mssql.compatability_mode = Off
mssql.secure_connection = Off
tidy.clean_output = Off
soap.wsdl_cache_limit = 5
ldap.max_links = -1

Configuring apache2 for mediawiki

Published 18 Jun 2016 by Tommy Pollák in Newest questions tagged mediawiki - Ask Ubuntu.

For MediaWiki to work on Apache2, a file /etc/apache2/conf/mediawiki is required. Where can I find it?

Teams: Work Better Together

Published 14 Jun 2016 by DigitalOcean in DigitalOcean Blog.

At DigitalOcean we know that it takes teamwork to build and ship great things to the world. Our own products wouldn't exist without passionate, hard-working teams collaborating to create easy-to-use experiences for developers.

A year ago we launched Teams on DigitalOcean to "better support teams of developers and companies working on large-scale and established applications." The first iteration focused on developer teams managing larger, more complex, production systems by organizing all of their infrastructure under one roof, with one invoice, and no shared credentials.

Since then, over 30,000 teams have been created, with some of those teams having hundreds of members. We felt this was a good first step, and we were interested to hear what you had to say about how we could help your teams work better. The feedback we received was resounding:

We are happy to announce that, as of today, we've made it even easier to work with your teams using DigitalOcean. These improvements make scaling team collaboration simple.

One account, multiple teams

Team members are no longer restricted to membership in a single team. With a single login, everyone can now be a part of up to 10 teams.

Easily switch between teams and your personal account

There is now an easy-to-use dropdown for switching between your personal account and teams, located at the top-right of the control panel.

Switching teams

Better invitations & team management

It's easier than ever to invite members of your team. We added Gmail support, so you can connect and quickly invite your existing team in just a few clicks. Team owners can now see which team members have turned on two-factor-auth, and we've added better search and sorting functionality to help really large teams easily manage their members.

New role for "billing"

Lastly, for teams who need someone to make sure everything keeps running smoothly, there is a new "billing" role within a team, which grants access to your account's billing settings but not Droplets.

Keep the ideas coming

Thank you to everyone who has used Teams and provided feedback. We hope these improvements help make it a little easier for you to build and ship great things. Please keep the feedback coming. We would love to hear from you!


Published 14 Jun 2016 by mblaney in Tags from simplepie.

Merge pull request #453 from mblaney/master

This release fixes an IRI parsing bug reported recently. It also

Referral program, reloaded

Published 6 Jun 2016 by Pierrick Le Gall in The Blog.

Spending money on expensive advertising campaigns to recruit new customers offers, for Piwigo, an unpredictable return on investment. Because Piwigo relies on the satisfaction of its existing customer base as its most important selling point, we have decided to spend our advertising budget on rewarding our existing customers for introducing new customers, in the shape of their friends and colleagues.

To this end, a referrer who successfully introduces a new customer, one who actually subscribes, will have their reward increased from a free one-month extension of their subscription to a free six-month extension. In other words, find just two new customers for Piwigo and you earn a full year for free. The new user still has an incentive too: instead of receiving 13 months for the price of 12, they will receive 14 months.

New convenient feature: you can easily copy your referral code or signup link


To start with the referral program, open your Piwigo on the page [Administration > My account > Manage > Referrals]. You can also read the details in our blog post written in 2011.


Published 2 Jun 2016 by mblaney in Tags from simplepie.

Merge pull request #450 from mblaney/master


Best practice for temp directory used by MediaWiki?

Published 1 Jun 2016 by jww in Newest questions tagged mediawiki - Webmasters Stack Exchange.

This question appears to be related to File system permission for MediaWiki uploads? Tightening permissions has led to "Error creating thumbnail" messages:


MediaWiki has a page on the temp directory at Manual:$wgTmpDirectory, but the manual does not discuss it in a security context.

Here's our current setting:

$ sudo grep Tmp /var/www/html/w/LocalSettings.php
$wgTmpDirectory     = "{$wgUploadDirectory}/tmp";

Our /etc/php.ini also has a setting for upload_tmp_dir, but it's not clear to me why MediaWiki is not using it. I'm guessing it has something to do with security, but again, the MediaWiki manual does not discuss it.
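As a hedged sketch (not official MediaWiki guidance), one common hardening pattern is to point $wgTmpDirectory at a directory outside the document root that only the web-server user can write; the path and the apache user below are assumptions:

```php
// Sketch: a temp directory outside the web root, writable only by the
// web-server user. The path is an assumption, not a MediaWiki standard.
$wgTmpDirectory = '/var/lib/mediawiki/tmp';
// One-time shell preparation (as root):
//   mkdir -p /var/lib/mediawiki/tmp
//   chown apache:apache /var/lib/mediawiki/tmp
//   chmod 700 /var/lib/mediawiki/tmp
```

Keeping the directory out of the web root means half-processed uploads are never directly servable by Apache, which is presumably the security concern the manual leaves unstated.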

What is the best practice for setting the temp directory for MediaWiki?

Introducing Our Bangalore Region: BLR1

Published 30 May 2016 by DigitalOcean in DigitalOcean Blog.

Today we are excited to announce the launch of our first datacenter in India and our twelfth globally. Developers can now launch Droplets in our newest region, Bangalore (BLR1)!

Our community has been requesting an India region for a while now, and we're thrilled to announce that it is now finally available. We will continue to offer a single pricing plan across all of our datacenters worldwide, including Bangalore, with SSD cloud servers starting at $5 USD per month.

Our goal is to empower developers and software companies around the world to build amazing things, and our robust, affordable, and simple infrastructure is making the cloud more accessible than ever. Today, India is home to the fastest growing ecosystem of startups and entrepreneurs, with approximately 4,000 startups launching this past year. With the number of software developers throughout India expected to grow to over 5 million by the year 2018, this region is poised to unleash a tremendous amount of innovation in the next decade. We want to be there to support every startup to grow and succeed.

We're focused on making it easier than ever before for startups and teams of software developers from India, and around the world, to deploy and scale their applications. We are excited to see what we can build together in Bangalore.

No need to wait any longer. Spin up a Droplet in BLR1!

Aretinos and Friends at the Odd Fellow

Published 29 May 2016 by Dave Robertson in Dave Robertson.


File system permission for MediaWiki?

Published 28 May 2016 by jww in Newest questions tagged mediawiki - Webmasters Stack Exchange.

We are an open source project and we have an Apache web server with a MediaWiki 1.25 installation. It's a standalone installation, so we don't have the complications of shared hosting and varying userids. After 20 years or so and a few migrations, we had a fair number of files owned by "no user" and "no group".

We cleaned up the file system permissions by effectively setting permissions on files and directories to root:apache, where owner was entitled with rw and group was entitled with r. It was not haphazard; rather we identified missing permissions with commands like find /var/www -nouser, find /var/www -nogroup, find /var/www \! -user root and find /var/www \! -group apache.

We also identified two special cases: root:root and apache:apache and reset the permissions. root:root was reset because it meant the file was not accessible to the web server. apache:apache was reset because it meant a high risk component had write permissions.

Problem: We managed to break MediaWiki uploads and thumbnails, which I believe was caused by the apache:apache reset. It is mostly OK because admins use the feature and we know what caused the break. I'm now trying to correct the break, but I can't determine what the file system permissions are supposed to be. Searching turns up only application-level permissions for items like LocalSettings.php and $wgGroupPermissions, and the MediaWiki Architecture docs don't discuss it.

Question: What are the filesystem permissions supposed to be for a MediaWiki installation in a non-shared configuration?

For completeness, "non-shared" means the "single user" use case from the cited What permissions should my website files/folders have on a Linux webserver? The VM is dedicated to us, and there's a single Apache, MediaWiki and MySQL user (MediaWiki appears to piggy-back on Apache, so there are only Apache and MySQL users).

We are not in a shared hosting environment with different instances of Apache and MediaWiki running under different user contexts or with different file permissions for each distinct subscriber.

Here is the exception when I attempt to upload a file. I used to be able to upload it when filesystem permissions were "fast and loose":

[724f5260] /wiki/Special:Upload MWException from line 1856 of /var/www/html/w/includes/filerepo/file/LocalFile.php: Could not acquire lock for ''


#0 /var/www/html/w/includes/filerepo/file/LocalFile.php(1148): LocalFile->lock()
#1 /var/www/html/w/includes/upload/UploadBase.php(715): LocalFile->upload(string, string, boolean, integer, array, boolean, User)
#2 /var/www/html/w/includes/specials/SpecialUpload.php(476): UploadBase->performUpload(string, boolean, boolean, User)
#3 /var/www/html/w/includes/specials/SpecialUpload.php(195): SpecialUpload->processUpload()
#4 /var/www/html/w/includes/specialpage/SpecialPage.php(384): SpecialUpload->execute(NULL)
#5 /var/www/html/w/includes/specialpage/SpecialPageFactory.php(582): SpecialPage->run(NULL)
#6 /var/www/html/w/includes/MediaWiki.php(267): SpecialPageFactory::executePath(Title, RequestContext)
#7 /var/www/html/w/includes/MediaWiki.php(566): MediaWiki->performRequest()
#8 /var/www/html/w/includes/MediaWiki.php(414): MediaWiki->main()
#9 /var/www/html/w/index.php(41): MediaWiki->run()
#10 {main}

Here are the users contexts for the running web server.

$ sudo ps aux | egrep -i '(apache|http|media|wiki)'
root       127  0.0  1.2 552216 13424 ?        Ss   01:35   0:12 /usr/sbin/httpd -DFOREGROUND
apache    7318  0.5  2.5 564804 27140 ?        S    11:57   0:05 /usr/sbin/httpd -DFOREGROUND
apache    7346  0.3  2.4 565124 25548 ?        S    11:58   0:03 /usr/sbin/httpd -DFOREGROUND
apache    7351  0.6  3.4 574220 36580 ?        S    11:58   0:05 /usr/sbin/httpd -DFOREGROUND
apache    7477  0.1  1.3 554088 14012 ?        S    12:10   0:00 /usr/sbin/httpd -DFOREGROUND
apache    7487  0.9  2.9 571148 30632 ?        S    12:11   0:00 /usr/sbin/httpd -DFOREGROUND

The MediaWiki 1.27 migration is planned. We are holding off until we get the permissions correct.
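For what it's worth, a minimal sketch of the conventional fix, assuming the default layout where uploads live under images/ and Apache runs as apache:apache (WIKI_ROOT and the user/group names are assumptions): the upload tree is the one place the web server needs write access, with the rest of the tree left read-only to it.

```shell
# Sketch: restore web-server write access on MediaWiki's upload tree only.
# WIKI_ROOT and the apache:apache user/group are assumptions; adjust to taste.
WIKI_ROOT=/var/www/html/w
chown -R apache:apache "$WIKI_ROOT/images"
find "$WIKI_ROOT/images" -type d -exec chmod 755 {} +   # dirs: traversable
find "$WIKI_ROOT/images" -type f -exec chmod 644 {} +   # files: readable
```

This matches the earlier root:apache scheme everywhere except images/, where the "Could not acquire lock" exception suggests the web server lost its ability to create lock files.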

Thousands More Hip-Hop Mixtapes, Why So

Published 23 May 2016 by Jason Scott in ASCII by Jason Scott.

A few more thoughts on this one.

A lot of people stopped by when the word about the Hip-Hop Mixtape Collection got around. They stopped by this little site, and then hopped over to the main collection, and they’ve been having a great old time.

When tens of thousands of people swing through a new thing, you get variant opinion, and if you’re really super double-lucky, you get some discussions way down there that are rather interesting on a “well, few people were ever going to talk about that” way.

Here are those, based on what I read:

Let’s address those, plus a few other things.

Why doesn’t this guy monetize this! 

Because I work for a non-profit that’s a library and archive, and we don’t monetize stuff like this. We don’t put up ads and we don’t put up click-throughs or pop-ups or demands for cash. It’s actually heartening to get these sorts of comments, because it means they’ve probably never heard of the Internet Archive before, and woo-hoo, new patrons! The more people who hear about the Archive for the first time, the better the world is for everyone. So anyway, no monetization/financial schemes behind this, sorry. (Some wanted to invest.) I’ve learned there are sites that do ad-supported distribution of these mixtapes, and they have all sorts of barriers and clickthroughs to ensure you see the ads. We are not them, that’s not what we do over at the archive.

A Bunch of these Tapes are Fakes/Crap. 

So, I came into this thing like I do a lot of things – go out and acquire whatever I can find and pump it basically automatically into thousands of items (you don’t think I’ve listened to these things in any great amount, do you?). As a result, it’s been a learning curve to find what’s in there. And what I learned is that there’s a wide spectrum of tapes out there, and that Sturgeon’s Law applies quite readily.

There are tapes that are cool amateur productions (created by a small crew or by someone trying to break into the business or get their voice heard), tapes that are kind of promotional items (like, they drop them into the world so word about the artist gets far and wide, usually done by some professional organization) and then there’s DJ mixes, where they do intense remixes of music to showcase their talents. Oh, and then there’s DJ mixes that are basically just a bunch of mp3s thrown together. As we’re finding those or get told about them, they go down. There’s nothing creative or new there (except maybe the cover art). The world is not bettered by them – I won’t miss them. So it’ll take a little while for this all to wring out, but it’ll happen.

Aw Man, it’s Only Post-2000 Stuff.

There’s definitely a lean towards the present with these mixtapes, probably a function of how I’m getting them, from online collections. There’s a few that predate 2000, but those are going to be from cassette tapes, and I’ve not yet stumbled on the Elephant Graveyard of old hiphop mixtapes from cassette. (I’ve got collections of rave tapes, and other 1980s and 1990s artifacts, of course.) I think it’s just a matter of time – after this current pipeline dries up, I’ll start trying to get us to host older and older stuff. How well that goes is up to the people out there – like everything else on the archive, it’s a matter of folks reaching out or giving good pointers or suggestions. I might stumble on things myself but it’s not guaranteed. As it is, the current collection is low-hanging fruit, and some of it is rotten and some of it is very fresh. But I definitely am not sitting on some hidden pile of pre-2000 stuff and going “nah, too historic”.

A few other thoughts

The most intense part of this whole thing was that I had to write this crazy ecosystem of around 15 scripts that deal with a whole pile of contingencies with the tapes. These scripts will fix ingested files, verify they’re what they say they are, reconfigure cover images so they’re in the right order, and add automatic metadata where possible. I actually have directories that drain into other directories that then drain into other directories, and then scripts do automatic evaluations all the way around, and then upload. It’s a terrible contraption but the results are generally OK. I then have to write scripts that crawl through the stuff and clean up what went there, and the result is what you see.

The result of this scriptology is that I’ve learned even more about dealing with odd ingestions that will be reflected on other collections as I go, i.e. the console demos collection I’ve been adding, which does all sorts of crazy robot stuff on combination .zip/.rar/whatever stuff from all sorts of sources. It sort of works! It’ll make things easier in the future! Everyone wins!

And finally – I realize that I am just stumbling backwards into this mixtape thing. It got along quite well without me or the Internet Archive for decades. It doesn’t “need” us anymore than many subcultures “need” us – but my hope is that the appearance and ease-of-access of these tapes will foster both spread of the best of what’s out there, and bring more people to the site to check out all the other things we’re hosting. I’m due someone to come in and lecture me on the “right” way to do all this and what it all “means”, and I’m up for that conversation. What I do know is that tens of thousands of listens are already on the site, with a few thousand more listens every day so whatever it is we’re doing, we’re doing it right for somebody out there. Let’s keep doing that.

And finally.

If you only have one album from this whole collection you want to be told to listen to, if you want just one single tape to somehow magically consolidate all the thousands and thousands of works on the site into one single item, well, ladies and gentlemen, your humble curator must point you in a single direction:

Yes, that’s right, I’m betting the house on Hamburger Helper: Watch The Stove, a 5-song EP mixtape of rap and hiphop, even sort of a ballad, about Hamburger Helper. Hey come back

Sure, you’re going to scoff, but over the course of this mixtape, you will have your eyes opened to the myriad feelings and deep emotions of Hamburger Helper, and you too will sympathize with Helper as he explains how the world simply can’t do without this delicious mix. And if there’s one caveat, one life motto you will walk away with, it’s to never take someone’s Helper. Just… don’t do it.

Enjoy the tapes. And yes, if you have leads on good additions to the collection, hit me up.


The Elixir of concurrency

Published 23 May 2016 by Carlos Fenollosa in Carlos Fenollosa — Blog.

Elixir is a fairly young language that was born when José Valim and a few Rails developers tried to create a modern language optimized for concurrent, distributed, lightweight processes.

They wanted a modern Ruby-like syntax on top of a well-tested process manager, the Erlang VM. The result is Elixir, defined as a dynamic, functional language designed for building scalable and maintainable applications, a correct but vague statement which doesn't do justice to its power and elegance.

I recently compared the move from Python to Elixir to a leap similar to moving from Java to Python. It feels like something new, modern, powerful, with killer features that you won't want to give up.

In Python I found a REPL, list comprehensions, a super clean syntax and decorators. Elixir brings lightweight supervised processes, pattern matching, a fully functional programming language, pipes and a terrific build tool: mix.

If you've never written functional code, the jump is significant. I took a Scala course a couple of years ago, and I still needed almost two full weeks to write production code in Elixir. The language is young, Stack Overflow is of no help —no kidding, that is a big deal—, and there are few libraries on GitHub.

A small community also comes with some upsides: people are more motivated and willing to help, centralized tools like forums and IRC channels are still manageable, and you may even suggest changes to the language for upcoming versions.

What is Elixir for?

I had a middle school teacher who said that you can't define something by stating what it's not. However, in programming, mentioning use cases which are not suitable for the language is a good way to start.

Elixir is probably not the first choice for single-core software: heavy numerical computation, CPU-intensive apps or desktop applications. Since it's very high level, systems programming is also out of the picture.

Elixir is great for web applications, standalone or using the Phoenix framework —Elixir's Rails—. It really shines for building highly scalable, fault-tolerant network applications, like chats, telecommunications or generic web services.

Why is that? Thanks to the Erlang VM, processes are really tiny, each one is garbage collected with low latency, they communicate by sending location-independent messages over the network via the VM (you can run result = Machine2.Module.function(params) from Machine1), and spawning and managing these processes is effortless thanks to some of its abstractions.
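As a rough sketch of that location transparency — node names, MyApp.Worker, heavy_task and params are all hypothetical here, and both nodes would need to be started with names and connected:

```elixir
# Start two named nodes, e.g.:
#   iex --sname machine1 -S mix
#   iex --sname machine2 -S mix
Node.connect(:"machine2@localhost")

# Run a function on the other machine and receive the result here,
# using OTP's rpc module:
result = :rpc.call(:"machine2@localhost", MyApp.Worker, :heavy_task, [params])

# Message sending is location-independent: the same send/2 works
# whether the registered process lives on this node or a remote one.
send({MyApp.Worker, :"machine2@localhost"}, {:work, self(), params})
```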

Finally, Elixir's basic modules also shine: Plug and Router for managing HTTP requests, Ecto for relational databases and ETS and Mnesia for distributed in-memory databases.

Many recommend Elixir if only for Phoenix, but I found that for most backend applications it is enough to use Plug and Router. Phoenix is impressive but I believe it's a mistake to jump right into it without trying the base modules first, so my recommendation for beginners is to hold off on Phoenix until you really need it.

One of Elixir's highlights, the pipe operator, is a fantastic approach to working with state in a functional manner. Instead of running readlines(fopen(user_input(), "r")).uppercase().split(), try the more readable user_input() |> fopen("r") |> readlines() |> uppercase() |> split().
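The same idea with real standard-library functions, as a minimal runnable sketch:

```elixir
# Without pipes: the call chain reads inside-out.
String.split(String.upcase(String.trim("  hello elixir  ")))

# With pipes: each step feeds its result into the next, left to right.
"  hello elixir  "
|> String.trim()
|> String.upcase()
|> String.split()
# => ["HELLO", "ELIXIR"]
```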

It is a language which was clearly designed to stand on the shoulders of giants, while providing modern capabilities for developers.

Elixir's abstractions

To store centralized <key, value>-like data, instead of a Singleton, Elixir provides an Agent. It keeps state in memory and many processes can access and modify it without concurrency issues.
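A minimal sketch of an Agent as a shared key-value store (the :kv name and :visits key are illustrative):

```elixir
# Start an Agent holding an empty map; register it under a name
# so any process can reach it without passing the pid around.
{:ok, _pid} = Agent.start_link(fn -> %{} end, name: :kv)

# Updates and reads are serialized through the Agent process,
# so concurrent callers never see a torn update.
Agent.update(:kv, &Map.put(&1, :visits, 1))
Agent.update(:kv, &Map.update!(&1, :visits, fn n -> n + 1 end))
Agent.get(:kv, &Map.get(&1, :visits))
# => 2
```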

The language can spawn processes much like threads, using spawn_link, but you probably don't want to do that. You'd rather use a Task, which is basically async/await, or a Gen(eric)Server, a very cool abstraction that receives requests from other processes, spawns helper mini-servers and processes the results in parallel, for free.
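A minimal sketch of both abstractions (the Counter module is illustrative):

```elixir
defmodule Counter do
  use GenServer

  # The server's state is just the current count.
  def init(n), do: {:ok, n}

  # Reply with the current value and increment the state.
  def handle_call(:next, _from, n), do: {:reply, n, n + 1}
end

{:ok, pid} = GenServer.start_link(Counter, 0)
GenServer.call(pid, :next)  # => 0
GenServer.call(pid, :next)  # => 1

# A Task is the async/await flavor: run work concurrently,
# then block only when you actually need the result.
task = Task.async(fn -> Enum.sum(1..100) end)
Task.await(task)  # => 5050
```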

All tasks can be controlled using the Supervisor, which holds other abstractions as its "children" and automatically restarts them when they crash.
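A sketch of that supervision in practice — MyApp.Cache is a hypothetical child, and the one_for_one strategy restarts only the child that crashed:

```elixir
defmodule MyApp.Cache do
  use Agent
  def start_link(_), do: Agent.start_link(fn -> %{} end, name: __MODULE__)
end

children = [MyApp.Cache]
{:ok, _sup} = Supervisor.start_link(children, strategy: :one_for_one)

# If MyApp.Cache crashes, the supervisor starts a fresh copy in its
# place, so callers can simply retry against the registered name.
Agent.get(MyApp.Cache, & &1)
# => %{}
```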

Finally, your code is contained inside a single project which can manage different apps, with modules that hold functions. No packages, no classes, no objects. Modules, functions, structs and basic data types.

Dependency management is straightforward thanks to mix; builds and testing are handled by mix too. As opposed to other multi-tools like Gradle, this one is really fast.

Is that too much to process? I felt that at first, too. Give it some time and your brain will eventually think in terms of Supervisors which manage GenServers which spawn Agents and Tasks when needed.

Let it crash

Elixir's mantra is to let processes crash. I found it shocking and counter-intuitive, but with some explanation it makes a lot of sense.

Developers don't want their code to crash, and Elixir doesn't promote writing bad code. However, let's agree that there are many reasons besides bad programming which can make software crash. If we have a server which runs stuff and at some point we have, say, 100 connections every second, one might crash eventually because of a bug in any component, hardware issues, a cosmic ray, or Murphy's law.

The question is: in the event of an unfortunate, unavoidable crash, how will your system react?

  1. Bring everything down?
  2. Try to capture the error and recover?
  3. Kill the crashed process and launch another one in its place?

For example, C uses approach 1. Most modern languages with Exceptions like Java and Python use 2. Elixir uses 3. This is not suitable for all environments, but it is perfect for those use cases which fit Elixir: concurrent network processes.

With Elixir, a single failure never brings the system down. What's more, it automatically restarts the crashed process, so the client can instantly retry and, unless there is a reproducible bug in your code, the fresh process will finish without an issue.

The bottom line is: a single client may be unlucky and crash at some point, but the rest of the system will never notice.

How to start?

Let's get our hands dirty. After reading many sites, watching hours of video and following a dozen tutorials, here are the resources I found the most valuable. I'd suggest following this order.

Getting started

  1. Madrid Elixir Meetup 2016-03. If you understand Spanish, this is the best intro to Elixir. Otherwise, watch All aboard the Elixir Express! which is a bit outdated but very comprehensive.
  2. Official "Getting Started" guide. It's the best and the most current. Follow it from start to finish, including the advanced chapters.
  3. Elixir School. A nice complement to the official guide. Most things are very similar, but the different approach on OTP will help you understand it better.
  4. "Understanding Elixir's GenServer" and "Elixir's supervisors: a conceptual understanding" are two short reads with yet another explanation of OTP features.
  5. Elixir Cheat Sheet. The best one out there.

First projects

  1. vim-elixir-ide. Elixir support for vim, not the best plugin but suitable for beginners.
  2. Elixir examples. The Elixir guide covers all these, but it's handy to have common idioms on a single page: "string to list", "concatenate list", "optional function parameters", etc.
  3. Portal Game by José Valim. A complement to the sample project on the official guide.
  4. Elixir Koans and Exercism are mini exercises that you can use to improve your Elixir agility. Along the same lines, Elixir Golf proposes weekly puzzles to solve.
  5. Learning Elixir. Joseph Kain has a ton of content with mini projects and examples you can follow. Top quality.
  6. Excasts and Elixir sips have short screencasts that you can check out for reference.
  7. ElixirConf videos contain very interesting talks which may be overwhelming for beginners, but are worth a look later on.
  8. Install Elixir and Phoenix on OSX. If you want to use Phoenix on OSX, you may need this help.
  9. Phoenix Official Guide. Phoenix isn't necessary for simple web services, you can use Plug. But for large projects you'll need a framework. Nothing like the official guide.

Getting help

  1. Awesome Elixir. A list of Elixir resources, where I found many of these.
  2. Elixir Tip and Elixir Status regularly link to Elixir-related articles and videos, and Plataformatec Elixir posts is where the language authors share news and tips.
  3. If you have questions about code, try the Elixir forum first, the IRC channel or Slack. The developers would like to transition all help requests away from the mailing list, leaving it for language-related discussions.
  4. /r/elixir if you're into Reddit

Closing thoughts

I think that's all for the moment. I hope this post can help some beginners to get their hands on the language and start writing production code as soon as possible.

For anyone who wants to know what all the Elixir fuss is about, it's difficult to explain, especially for somebody like me who has been programming in imperative languages all his life.

When I recommended Elixir to a friend, he replied, "A highly concurrent, functional language using the Erlang