Sam's news

Here are some of the news sources I follow.

My main website is at https://samwilson.id.au/.


Legal considerations regarding hosting a MediaWiki site

Published 27 Apr 2017 by Oliver K in Newest questions tagged mediawiki - Webmasters Stack Exchange.

What legal considerations are there when creating a wiki using MediaWiki for people to use worldwide?

For example, I noticed there are privacy policies & terms and conditions; are these required to safeguard me from any legal battles?


HHVM issue on ISPconfig MediaWiki installation

Published 27 Apr 2017 by rfnx in Newest questions tagged mediawiki - Stack Overflow.

I have just set up MediaWiki on my VM (Debian Jessie, ISPConfig 3.1) and I get a 500 error when accessing the website. Here is my apache2 error.log:

[fastcgi:error] [pid 8066] (2)No such file or directory: [client 51.15.70.216:56974] FastCGI: failed to connect to server "/var/www/clients/client1/web10/cgi-bin/hhvm-fcgi-[IP-address]-mydomain.com": connect() failed
[fastcgi:error] [pid 8066] [client 51.15.70.216:56974] FastCGI: incomplete headers (0 bytes) received from server "/var/www/clients/client1/web10/cgi-bin/hhvm-fcgi-[IP-address]-80-mydomain.com"

All permissions are set correctly on the web folders. Can anybody please tell me what I can do to get my configuration working?

thanks!


Follow redirects when using action=raw

Published 26 Apr 2017 by BrianFreud in Newest questions tagged mediawiki - Stack Overflow.

Is there any argument to tell MediaWiki to follow redirects when using the action=raw argument? The 'redirects' argument doesn't seem to be valid when using action=raw. https://www.mediawiki.org/wiki/Manual:Parameters_to_index.php/en#Raw doesn't list any way to do this.

I'm trying to find a way to allow scripts to be cached - ie 'title=MediaWiki:Blah.v1.0.0.js&action=raw&ctype=text/javascript' - where MediaWiki:Blah.v1.0.0.js redirects to MediaWiki:Blah.js. That should allow me to break the cache as needed simply by moving the redirect page from MediaWiki:Blah.v1.0.0.js to MediaWiki:Blah.v1.0.1.js and so on.
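
One workaround, sketched here rather than a built-in index.php parameter: resolve the redirect through the API first, then request the target page with action=raw. The wiki base URL and page title below are placeholders.

<?php
// Sketch: resolve the redirect via the API, then fetch the resolved page raw.
// $wiki and $title are placeholders for your own wiki and redirect page.
$wiki  = 'https://www.mediawiki.org/w';
$title = 'MediaWiki:Blah.v1.0.0.js';

// 1. Ask the API to resolve the redirect.
$api  = $wiki . '/api.php?action=query&redirects=1&format=json&titles=' . urlencode($title);
$data = json_decode(file_get_contents($api), true);
$target = isset($data['query']['redirects'][0]['to'])
    ? $data['query']['redirects'][0]['to']
    : $title; // not a redirect: fall back to the original title

// 2. Fetch the resolved title with action=raw.
echo file_get_contents(
    $wiki . '/index.php?action=raw&ctype=text/javascript&title=' . urlencode($target)
);

The obvious cost is a second request per load, which works against the caching goal, so this probably only makes sense server-side or in a build step rather than in the browser.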


Create Github Summary Page with Live info from .MD documents

Published 26 Apr 2017 by zenijos10 in Newest questions tagged mediawiki - Stack Overflow.

I have 100+ .md documents in an enterprise GitHub repository. Each document has 5 items that I would like to put into a separate summary page that could double as a table of contents. For example: Document Name, Author, Critical Dependencies, Contact Number, Escalation Number, Application Status.

Any of these items can change at any time, and they often do. Currently I have to make several updates, and my goal is to consolidate my efforts.

I want to make it so that the summary document pulls each field directly from the .md documents, at least when the summary page loads.

I'd like to handle this within the GitHub framework or with our MediaWiki. If necessary I can use JavaScript, but honestly I have no idea where to even begin.

Can this be accomplished with the GitHub wiki, MediaWiki, or user/project pages, with no additional infrastructure?

Please point me to an API, concept, code library, or actual code that can help to accomplish this.
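
Not part of the question, but a minimal sketch of one possible approach using the standard GitHub contents API from PHP. The owner, repo, path, field names and the "Field: value" layout are all assumptions; an enterprise GitHub instance usually exposes the same API under https://<host>/api/v3, and private repos need a token.

<?php
// Sketch: list the .md files in a repo directory, pull a few "Field: value"
// lines out of each one, and print summary rows. All names are placeholders.
$apiBase = 'https://api.github.com/repos/OWNER/REPO/contents/docs';
$fields  = ['Author', 'Critical Dependencies', 'Contact Number', 'Escalation Number', 'Application Status'];

// The GitHub API rejects requests without a User-Agent header.
$ctx = stream_context_create(['http' => ['header' => "User-Agent: summary-builder\r\n"]]);

$files = json_decode(file_get_contents($apiBase, false, $ctx), true);
foreach ($files as $file) {
    if (substr($file['name'], -3) !== '.md') {
        continue; // skip anything that isn't a Markdown document
    }
    $text = file_get_contents($file['download_url'], false, $ctx);
    $row  = [$file['name']];
    foreach ($fields as $field) {
        // Grab "Field: value" lines; adjust the pattern to the documents' real layout.
        $row[] = preg_match('/^' . preg_quote($field, '/') . ':\s*(.+)$/mi', $text, $m) ? trim($m[1]) : '';
    }
    echo implode(' | ', $row) . "\n"; // or emit Markdown/wikitext table rows instead
}

Run on a schedule or from a webhook, something like this could regenerate the summary page; a client-side JavaScript version fetching the same API on page load would follow the same shape.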


Dive into the Semantic Data Lake

Published 26 Apr 2017 by Phil Archer in W3C Blog.

Back in October last year, I highlighted a EU-funded project we’re involved with around big data. Led by Sören Auer at the Fraunhofer Institute for Intelligent Analysis and Information Systems, Big Data Europe has built a remarkably flexible big data processing platform. It wraps a lot of well-known components like Apache Spark and HDFS in Docker containers, along with triple stores (4Store, Semagrow, Strabon) and more. Through a simple UI, you select the components you want, click and… it’s done. As many instances of whatever components you need all installed in whatever infrastructure you choose to create your processing pipeline.

Most interesting from a W3C perspective is the way it handles data variety – one of the 3 Vs of big data (or 4 Vs if you include veracity). Data is stored in whatever format it’s in – relational, CSV, XML, RDF, JSON – just as it is. That’s the data lake, or data swamp – choose your metaphor. What BDE does then is to apply a semantic layer on top of that so that you can run a SPARQL query across all the data in the platform. A virtual graph is created at query time, with the SPARQL query deconstructed and individual bits of information pulled from whichever dataset is needed, then joined and returned as a single response. One of the lead engineers, Mohamed Nadjib Mami, explains more in this video. I heard about a similar approach being taken in a completely different context recently at the OGC’s Location Powers workshop in Delft. It’s an approach that has been shown to outperform the more usual approach of transforming everything into a single format at ingestion time, as you only ever query a small portion of the data. Of course this all depends on semantics, vocabularies, URIs and links.

If you’d like to know more about the Big Data Europe Integrator Platform, including the Semantic Data Lake, please join us for our official launch Webinar next Wednesday, take a look at the project Web site, or dive right in to the GitHub repo.


Reminder: Google Hates Widget Links

Published 26 Apr 2017 by Ipstenu (Mika Epstein) in Make WordPress Plugins.

In a blog post last year, Google reminded users that widgets are cool, but widget links are the suxxor.

Widgets can help website owners enrich the experience of their site and engage users. However, some widgets add links to a site that a webmaster did not editorially place and contain anchor text that the webmaster does not control. Because these links are not naturally placed, they’re considered a violation of Google Webmaster Guidelines.

If you look at the examples, they highlight things that many plugin (and theme) developers would consider acceptable links.

What does this mean for you? It means your powered-by link may adversely impact your users. Be smart. Make your links no-follow.

Also as a reminder: any and all powered-by links and credits must be opt-in. That is, a site owner must make the conscious and informed decision to display your credits. You cannot have them show by default, you cannot have them be opt-out, and you cannot hide them in display:none code or any other way that embeds a clickable link. Code comments like <!-- Powered by WordPress --> are fine.
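
For plugin authors wondering what that looks like in practice, here is a minimal sketch (the option name, hook and URL are placeholders, not a prescribed pattern): the credit only prints when the site owner has opted in, and the link carries rel="nofollow".

<?php
// Sketch: an opt-in, nofollow powered-by credit. The option defaults to off,
// so nothing is output until the site owner deliberately enables it.
function myplugin_footer_credit() {
    if ( ! get_option( 'myplugin_show_credit', false ) ) {
        return; // opt-in only: no credit unless explicitly enabled
    }
    echo '<p class="myplugin-credit">Powered by <a href="https://example.com/" rel="nofollow">My Plugin</a></p>';
}
add_action( 'wp_footer', 'myplugin_footer_credit' );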


get editors of each wikipedia page

Published 26 Apr 2017 by Eve.F in Newest questions tagged mediawiki - Stack Overflow.

I am trying to find the user ID of each major editor for every single Wikipedia article. I looked up MediaWiki and found out that it has a revision API that can provide information about editor ID and revision size given the article ID. My current strategy is to download the wiki dump for all articles, manually input the article names to the query, copy the results to a local text file, and parse it later. I am just wondering if there is anything I can do to automate the process, as the whole of Wikipedia is very big. Any improvement would be helpful. Thanks.
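
A hedged sketch of the API side of this (the wiki, title and output format are placeholders): the revisions module returns editor names and IDs directly, and following the continuation data avoids truncated histories, so no manual copy-and-paste is needed. For every article on a large wiki the dumps are still the kinder option, but the same loop works over any list of titles.

<?php
// Sketch: list editor id, name and revision size for one article via the API,
// following 'continue' so long histories are fetched completely.
$api    = 'https://en.wikipedia.org/w/api.php';
$params = [
    'action'   => 'query',
    'prop'     => 'revisions',
    'titles'   => 'Perth',            // placeholder: loop over your own list of titles
    'rvprop'   => 'userid|user|size|timestamp',
    'rvlimit'  => 'max',
    'continue' => '',
    'format'   => 'json',
];

do {
    $json = json_decode(file_get_contents($api . '?' . http_build_query($params)), true);
    foreach ($json['query']['pages'] as $page) {
        $revs = isset($page['revisions']) ? $page['revisions'] : [];
        foreach ($revs as $rev) {
            echo (isset($rev['userid']) ? $rev['userid'] : 0) . "\t"
               . (isset($rev['user']) ? $rev['user'] : '(hidden)') . "\t"
               . $rev['size'] . "\n";
        }
    }
    // The API hands back the parameters needed to resume where it stopped.
    $params = array_merge($params, isset($json['continue']) ? $json['continue'] : []);
} while (isset($json['continue']));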


Query mediawiki api

Published 25 Apr 2017 by Aaron Owen in Newest questions tagged mediawiki - Stack Overflow.

Bottom line: I want to get just the "area_total_km2" part of this response.

I'm unsure how to structure the query to get just that part of the response into a JSON file.
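
Without seeing the linked response this is guesswork, but assuming "area_total_km2" is an infobox parameter in the page's wikitext, a sketch like the following fetches the wikitext via the API and writes only that one field to a JSON file (the wiki, page title and output filename are placeholders):

<?php
// Sketch: pull the |area_total_km2= infobox parameter out of a page's wikitext
// and save just that value as JSON. Title and filename are placeholders.
$api = 'https://en.wikipedia.org/w/api.php?action=query&prop=revisions&rvprop=content'
     . '&format=json&formatversion=2&titles=' . urlencode('Australia');

$data     = json_decode(file_get_contents($api), true);
$wikitext = isset($data['query']['pages'][0]['revisions'][0]['content'])
    ? $data['query']['pages'][0]['revisions'][0]['content']
    : '';

if (preg_match('/\|\s*area_total_km2\s*=\s*([^\n|]+)/', $wikitext, $m)) {
    file_put_contents('area.json', json_encode(['area_total_km2' => trim($m[1])]));
}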


Installation of MediaWiki Vagrant - error logs

Published 25 Apr 2017 by Cywil in Newest questions tagged mediawiki - Stack Overflow.

I am trying to install MediaWiki Vagrant, following this page.

I'm on a fresh Ubuntu 16.04 installation and I get this error message:

==> default: Error: /usr/local/bin/multiversion-install /vagrant/mediawiki --wiki wiki --dbname wiki --dbpass wikipassword --dbuser wikiadmin --pass vagrant --scriptpath /w --server http://dev.wiki.local.wmftest.net:8080 --confpath /vagrant/settings.d/wikis/wiki  wiki Admin
==> default:  returned 1 instead of one of [0]
==> default: Error: /Stage[main]/Mediawiki/Mediawiki::Wiki[devwiki]/Exec[wiki_setup]/returns: change from notrun to 0 failed: /usr/local/bin/multiversion-install /vagrant/mediawiki --wiki wiki --dbname wiki --dbpass wikipassword --dbuser wikiadmin --pass vagrant --scriptpath /w --server http://dev.wiki.local.wmftest.net:8080 --confpath /vagrant/settings.d/wikis/wiki  wiki Admin
==> default:  returned 1 instead of one of [0]

Here are all my logs:

cywil@cywil-GT70-2OC-2OD:~$ cd vagrant
cywil@cywil-GT70-2OC-2OD:~/vagrant$ ./setup.sh

You're all set! Simply run `vagrant up` to boot your new environment.

(Or try `vagrant config --list` to see what else you can tweak.)
cywil@cywil-GT70-2OC-2OD:~/vagrant$ vagrant up
Bringing machine 'default' up with 'virtualbox' provider...
==> default: Checking if box 'debian/contrib-jessie64' is up to date...
==> default: Clearing any previously set forwarded ports...
==> default: Clearing any previously set network interfaces...
==> default: Preparing network interfaces based on configuration...
    default: Adapter 1: nat
    default: Adapter 2: hostonly
==> default: Forwarding ports...
    default: 8080 (guest) => 8080 (host) (adapter 1)
    default: 443 (guest) => 4430 (host) (adapter 1)
    default: 22 (guest) => 2222 (host) (adapter 1)
==> default: Running 'pre-boot' VM customizations...
==> default: Booting VM...
==> default: Waiting for machine to boot. This may take a few minutes...
    default: SSH address: 127.0.0.1:2222
    default: SSH username: vagrant
    default: SSH auth method: private key
==> default: Machine booted and ready!
==> default: Checking for guest additions in VM...
    default: The guest additions on this VM do not match the installed version of
    default: VirtualBox! In most cases this is fine, but in rare cases it can
    default: prevent things such as shared folders from working properly. If you see
    default: shared folder errors, please make sure the guest additions within the
    default: virtual machine match the version of VirtualBox you have installed on
    default: your host and reload your VM.
    default: 
    default: Guest Additions Version: 4.3.36
    default: VirtualBox Version: 5.0
==> default: Setting hostname...
==> default: Configuring and enabling network interfaces...
==> default: Mounting shared folders...
    default: /vagrant => /home/cywil/vagrant
    default: /vagrant/logs => /home/cywil/vagrant/logs
==> default: Machine already provisioned. Run `vagrant provision` or use the `--provision`
==> default: flag to force provisioning. Provisioners marked to run always will still run.

==> default: Machine 'default' has a post `vagrant up` message. This is a message
==> default: from the creator of the Vagrantfile, and not from Vagrant itself:
==> default: 
==> default: Vanilla Debian box. See https://atlas.hashicorp.com/debian/ for help and bug reports
cywil@cywil-GT70-2OC-2OD:~/vagrant$ vagrant roles list
Available roles:

  abusefilter                inputbox                   restbase                
  accountinfo                interwiki                  revisionslider          
  analytics                  invitesignup               sal                     
  antispam                   jsduck                     sandboxlink             
  antispoof                  jsonconfig                 scholarships            
  apex                       kafka                      score                   
  apparmor                   kartographer               scribunto               
  articleplaceholder         kartographerwv             securepoll              
  babel                      keystone                   semanticextraspecialproperties
  betafeatures               l10nupdate                 semanticmediawiki       
  buggy                      labeledsectiontransclusion   semanticresultformats   
  campaigns                  langwikis                  semantictitle           
  cassandra                  ldapauth                   sentry                  
  categorytree               liquidthreads              shorturl                
  centralauth                livingstyleguide           simple_miser            
  centralnotice              lockdown                   simple_performant       
  checkuser                  loginnotify                sitematrix              
  cirrussearch               maps                       spark                   
  cite                       massaction                 statsd                  
  citoid                     massmessage                striker                 
  cldr                       math                       svg                     
  codeeditor                 mathoid                    swift                   
  cologneblue                mathsearch                 templatedata            
  commons                    memcached                  templatesandbox         
  commons_datasets           mleb                       templatestyles          
  commonsmetadata            mobileapp                  testwiki                
  confirmedit                mobilecontentservice       textextracts            
  contactpage                mobilefrontend             three_d                 
  contenttranslation         modern                     throttleoverride        
  disableaccount             molhandler                 thumb_on_404            
  disambiguator            * monobook                   thumbor                 
  doublewiki                 multimedia                 tidy                    
  easytimeline               multimediaviewer           timedmediahandler       
  echo                       navigationtiming           timeless                
  education                  newsletter                 titleblacklist          
  elk                        newusermessage             torblock                
  emailauth                  notebook                   translate               
  embedvideo                 nuke                       uls                     
  eventbus                   oathauth                   uploadslink             
  eventlogging               oauth                      uploadwizard            
  externalstore              oauthauthentication        urlgetparameters        
  featuredfeeds              oozie                      urlshortener            
  fileannotations            openbadges                 usermerge               
  flaggedrevs                ores                       variables               
  flow                       pageassessments            varnish                 
  fss                        pagedtiffhandler           vectorbeta              
  fundraising                pageimages                 vipsscaler              
  gadgets                    pagetriage                 visualeditor            
  gadgets2                   pageviewinfo               warnings_as_errors      
  geodata                    parserfunctions            widgets                 
  geodata_elastic            parsoid                    wikibase_repo           
  geshi                      payments                   wikidata                
  gettingstarted             pdfhandler                 wikidatapagebanner      
  globalblocking             performanceinspector       wikidiff2               
  globalcssjs                phabricator                wikieditor              
  globalusage                phptags                    wikigrok                
  globaluserpage             phragile                   wikihiero               
  gpgmail                    pipeescape                 wikilove                
  graph                      poem                       wikimediaevents         
  graphoid                   poolcounter                wikimediaflow           
  greystuff                  popups                     wikimediaincubator      
  guidedtour                 private                    wikimediamaintenance    
  gwtoolset                  proofreadpage              wikimediamessages       
  hadoop                     psr3                       wikimetrics             
  headertabs                 questycaptcha              wikispeech              
  hive                       quicksurveys               wikitech                
  https                      quips                      xanalytics              
  hue                        quiz                       xhprofgui               
  iabot                      raita                      youtube                 
  iegreview                  relatedarticles            zend                    
  imagemetrics               renameuser                 zero                    

Roles marked with '*' are enabled.
Note that roles enabled by dependency are not marked.
Use `vagrant roles enable` & `vagrant roles disable` to customize.
cywil@cywil-GT70-2OC-2OD:~/vagrant$ vagrant roles enable monobook
Ok. Run `vagrant provision` to apply your changes.
cywil@cywil-GT70-2OC-2OD:~/vagrant$ vagrant provision
==> default: Running provisioner: lsb_check...
==> default: Running provisioner: shell...
    default: Running: /tmp/vagrant-shell20170425-9233-r1b890.sh
==> default: Running provisioner: puppet...
==> default: Running Puppet with site.pp...
==> default: Info: Loading facts
==> default: Notice: Compiled catalog for mediawiki-vagrant.dev in environment production in 4.18 seconds
==> default: Info: Applying configuration version '1493114917.f0b87099'
==> default: Notice: /Stage[main]/Npm/Exec[npm_set_cache_dir]/returns: executed successfully
==> default: Notice: /Stage[main]/Mediawiki/Mediawiki::Wiki[devwiki]/Exec[wiki_setup]/returns: Could not open input file: /vagrant/mediawiki/maintenance/install.php
==> default: Error: /usr/local/bin/multiversion-install /vagrant/mediawiki --wiki wiki --dbname wiki --dbpass wikipassword --dbuser wikiadmin --pass vagrant --scriptpath /w --server http://dev.wiki.local.wmftest.net:8080 --confpath /vagrant/settings.d/wikis/wiki  wiki Admin
==> default:  returned 1 instead of one of [0]
==> default: Error: /Stage[main]/Mediawiki/Mediawiki::Wiki[devwiki]/Exec[wiki_setup]/returns: change from notrun to 0 failed: /usr/local/bin/multiversion-install /vagrant/mediawiki --wiki wiki --dbname wiki --dbpass wikipassword --dbuser wikiadmin --pass vagrant --scriptpath /w --server http://dev.wiki.local.wmftest.net:8080 --confpath /vagrant/settings.d/wikis/wiki  wiki Admin
==> default:  returned 1 instead of one of [0]
==> default: Notice: /Stage[main]/Mediawiki/Mediawiki::Wiki[devwiki]/Exec[wiki_include_extra_settings]: Dependency Exec[wiki_setup] has failures: true
==> default: Warning: /Stage[main]/Mediawiki/Mediawiki::Wiki[devwiki]/Exec[wiki_include_extra_settings]: Skipping because of failed dependencies
==> default: Notice: /Stage[main]/Mediawiki/Mediawiki::Wiki[devwiki]/Exec[wiki_copy_LocalSettings]: Dependency Exec[wiki_setup] has failures: true
==> default: Warning: /Stage[main]/Mediawiki/Mediawiki::Wiki[devwiki]/Exec[wiki_copy_LocalSettings]: Skipping because of failed dependencies
==> default: Notice: /Stage[main]/Mediawiki::Ready_service/Systemd::Service[mediawiki-ready]/File[/lib/systemd/system/mediawiki-ready.service]: Dependency Exec[wiki_setup] has failures: true
==> default: Warning: /Stage[main]/Mediawiki::Ready_service/Systemd::Service[mediawiki-ready]/File[/lib/systemd/system/mediawiki-ready.service]: Skipping because of failed dependencies
==> default: Notice: /Stage[main]/Mediawiki::Ready_service/Systemd::Service[mediawiki-ready]/Exec[systemd reload for mediawiki-ready]: Dependency Exec[wiki_setup] has failures: true
==> default: Warning: /Stage[main]/Mediawiki::Ready_service/Systemd::Service[mediawiki-ready]/Exec[systemd reload for mediawiki-ready]: Skipping because of failed dependencies
==> default: Notice: /Stage[main]/Mediawiki::Ready_service/Systemd::Service[mediawiki-ready]/Service[mediawiki-ready]: Dependency Exec[wiki_setup] has failures: true
==> default: Warning: /Stage[main]/Mediawiki::Ready_service/Systemd::Service[mediawiki-ready]/Service[mediawiki-ready]: Skipping because of failed dependencies
==> default: Notice: /Stage[main]/Hhvm::Fcgi/Systemd::Service[hhvm]/File[/etc/systemd/system/hhvm.service.d]: Dependency Exec[wiki_setup] has failures: true
==> default: Warning: /Stage[main]/Hhvm::Fcgi/Systemd::Service[hhvm]/File[/etc/systemd/system/hhvm.service.d]: Skipping because of failed dependencies
==> default: Notice: /Stage[main]/Hhvm::Fcgi/Systemd::Service[hhvm]/File[/etc/systemd/system/hhvm.service.d/puppet-override.conf]: Dependency Exec[wiki_setup] has failures: true
==> default: Warning: /Stage[main]/Hhvm::Fcgi/Systemd::Service[hhvm]/File[/etc/systemd/system/hhvm.service.d/puppet-override.conf]: Skipping because of failed dependencies
==> default: Notice: /Stage[main]/Hhvm::Fcgi/Systemd::Service[hhvm]/Exec[systemd reload for hhvm]: Dependency Exec[wiki_setup] has failures: true
==> default: Warning: /Stage[main]/Hhvm::Fcgi/Systemd::Service[hhvm]/Exec[systemd reload for hhvm]: Skipping because of failed dependencies
==> default: Notice: /Stage[main]/Hhvm::Fcgi/Systemd::Service[hhvm]/Service[hhvm]: Dependency Exec[wiki_setup] has failures: true
==> default: Warning: /Stage[main]/Hhvm::Fcgi/Systemd::Service[hhvm]/Service[hhvm]: Skipping because of failed dependencies
==> default: Notice: /Stage[main]/Mediawiki/Mediawiki::Group[devwiki_suppress]/Mediawiki::Settings[devwiki_suppress_group]/File[/vagrant/settings.d/wikis/wiki/settings.d/puppet-managed/10-devwiki_suppress_group.php]: Dependency Exec[wiki_setup] has failures: true
==> default: Warning: /Stage[main]/Mediawiki/Mediawiki::Group[devwiki_suppress]/Mediawiki::Settings[devwiki_suppress_group]/File[/vagrant/settings.d/wikis/wiki/settings.d/puppet-managed/10-devwiki_suppress_group.php]: Skipping because of failed dependencies
==> default: Notice: /Stage[main]/Mediawiki/Exec[update_all_databases]: Dependency Exec[wiki_setup] has failures: true
==> default: Warning: /Stage[main]/Mediawiki/Exec[update_all_databases]: Skipping because of failed dependencies
==> default: Notice: /Stage[main]/Mediawiki/Mediawiki::User[admin_user_in_steward_suppress_on_wiki]/Mediawiki::Maintenance[mediawiki_user_Admin_wiki]/Exec[mediawiki_user_Admin_wiki]: Dependency Exec[wiki_setup] has failures: true
==> default: Warning: /Stage[main]/Mediawiki/Mediawiki::User[admin_user_in_steward_suppress_on_wiki]/Mediawiki::Maintenance[mediawiki_user_Admin_wiki]/Exec[mediawiki_user_Admin_wiki]: Skipping because of failed dependencies
==> default: Notice: /Stage[main]/Mediawiki/Mediawiki::User[admin_user_in_steward_suppress_on_wiki]/Mediawiki::Maintenance[mediawiki_user_Admin_wiki_steward,suppress]/Exec[mediawiki_user_Admin_wiki_steward,suppress]: Dependency Exec[wiki_setup] has failures: true
==> default: Warning: /Stage[main]/Mediawiki/Mediawiki::User[admin_user_in_steward_suppress_on_wiki]/Mediawiki::Maintenance[mediawiki_user_Admin_wiki_steward,suppress]/Exec[mediawiki_user_Admin_wiki_steward,suppress]: Skipping because of failed dependencies
==> default: Notice: /Stage[main]/Mediawiki/Mediawiki::Import::Text[Main_Page]/Mediawiki::Maintenance[add page devwiki/Main_Page]/Exec[add page devwiki/Main_Page]: Dependency Exec[wiki_setup] has failures: true
==> default: Warning: /Stage[main]/Mediawiki/Mediawiki::Import::Text[Main_Page]/Mediawiki::Maintenance[add page devwiki/Main_Page]/Exec[add page devwiki/Main_Page]: Skipping because of failed dependencies
==> default: Notice: /Stage[main]/Mediawiki/Mediawiki::Import::Text[Template:Main_Page]/Mediawiki::Maintenance[add page devwiki/Template:Main_Page]/Exec[add page devwiki/Template:Main_Page]: Dependency Exec[wiki_setup] has failures: true
==> default: Warning: /Stage[main]/Mediawiki/Mediawiki::Import::Text[Template:Main_Page]/Mediawiki::Maintenance[add page devwiki/Template:Main_Page]/Exec[add page devwiki/Template:Main_Page]: Skipping because of failed dependencies
==> default: Notice: /Stage[main]/Mediawiki::Jobrunner/File[/etc/default/jobrunner]: Dependency Exec[wiki_setup] has failures: true
==> default: Warning: /Stage[main]/Mediawiki::Jobrunner/File[/etc/default/jobrunner]: Skipping because of failed dependencies
==> default: Notice: /Stage[main]/Mediawiki::Jobrunner/File[/etc/jobrunner.json]: Dependency Exec[wiki_setup] has failures: true
==> default: Warning: /Stage[main]/Mediawiki::Jobrunner/File[/etc/jobrunner.json]: Skipping because of failed dependencies
==> default: Notice: /Stage[main]/Mediawiki::Jobrunner/File[/etc/logrotate.d/mediawiki_jobrunner]: Dependency Exec[wiki_setup] has failures: true
==> default: Warning: /Stage[main]/Mediawiki::Jobrunner/File[/etc/logrotate.d/mediawiki_jobrunner]: Skipping because of failed dependencies
==> default: Notice: /Stage[main]/Mediawiki::Jobrunner/File[/etc/logrotate.d/mediawiki_jobchron]: Dependency Exec[wiki_setup] has failures: true
==> default: Warning: /Stage[main]/Mediawiki::Jobrunner/File[/etc/logrotate.d/mediawiki_jobchron]: Skipping because of failed dependencies
==> default: Notice: /Stage[main]/Mediawiki::Jobrunner/Git::Clone[mediawiki/services/jobrunner]/File[/vagrant/srv/jobrunner]: Dependency Exec[wiki_setup] has failures: true
==> default: Warning: /Stage[main]/Mediawiki::Jobrunner/Git::Clone[mediawiki/services/jobrunner]/File[/vagrant/srv/jobrunner]: Skipping because of failed dependencies
==> default: Notice: /Stage[main]/Mediawiki::Jobrunner/Git::Clone[mediawiki/services/jobrunner]/Exec[git_clone_mediawiki/services/jobrunner]: Dependency Exec[wiki_setup] has failures: true
==> default: Warning: /Stage[main]/Mediawiki::Jobrunner/Git::Clone[mediawiki/services/jobrunner]/Exec[git_clone_mediawiki/services/jobrunner]: Skipping because of failed dependencies
==> default: Notice: /Stage[main]/Mediawiki::Jobrunner/Service::Gitupdate[jobrunner]/File[/etc/mw-vagrant/services/jobrunner.conf]: Dependency Exec[wiki_setup] has failures: true
==> default: Warning: /Stage[main]/Mediawiki::Jobrunner/Service::Gitupdate[jobrunner]/File[/etc/mw-vagrant/services/jobrunner.conf]: Skipping because of failed dependencies
==> default: Notice: /Stage[main]/Mediawiki::Jobrunner/Service::Gitupdate[jobchron]/File[/etc/mw-vagrant/services/jobchron.conf]: Dependency Exec[wiki_setup] has failures: true
==> default: Warning: /Stage[main]/Mediawiki::Jobrunner/Service::Gitupdate[jobchron]/File[/etc/mw-vagrant/services/jobchron.conf]: Skipping because of failed dependencies
==> default: Notice: /Stage[main]/Mediawiki::Jobrunner/Systemd::Service[jobrunner]/File[/lib/systemd/system/jobrunner.service]: Dependency Exec[wiki_setup] has failures: true
==> default: Warning: /Stage[main]/Mediawiki::Jobrunner/Systemd::Service[jobrunner]/File[/lib/systemd/system/jobrunner.service]: Skipping because of failed dependencies
==> default: Notice: /Stage[main]/Mediawiki::Jobrunner/Systemd::Service[jobrunner]/Exec[systemd reload for jobrunner]: Dependency Exec[wiki_setup] has failures: true
==> default: Warning: /Stage[main]/Mediawiki::Jobrunner/Systemd::Service[jobrunner]/Exec[systemd reload for jobrunner]: Skipping because of failed dependencies
==> default: Notice: /Stage[main]/Mediawiki::Jobrunner/Systemd::Service[jobrunner]/Service[jobrunner]: Dependency Exec[wiki_setup] has failures: true
==> default: Warning: /Stage[main]/Mediawiki::Jobrunner/Systemd::Service[jobrunner]/Service[jobrunner]: Skipping because of failed dependencies
==> default: Notice: /Stage[main]/Mediawiki::Jobrunner/Systemd::Service[jobchron]/File[/lib/systemd/system/jobchron.service]: Dependency Exec[wiki_setup] has failures: true
==> default: Warning: /Stage[main]/Mediawiki::Jobrunner/Systemd::Service[jobchron]/File[/lib/systemd/system/jobchron.service]: Skipping because of failed dependencies
==> default: Notice: /Stage[main]/Mediawiki::Jobrunner/Systemd::Service[jobchron]/Exec[systemd reload for jobchron]: Dependency Exec[wiki_setup] has failures: true
==> default: Warning: /Stage[main]/Mediawiki::Jobrunner/Systemd::Service[jobchron]/Exec[systemd reload for jobchron]: Skipping because of failed dependencies
==> default: Notice: /Stage[main]/Mediawiki::Jobrunner/Systemd::Service[jobchron]/Service[jobchron]: Dependency Exec[wiki_setup] has failures: true
==> default: Warning: /Stage[main]/Mediawiki::Jobrunner/Systemd::Service[jobchron]/Service[jobchron]: Skipping because of failed dependencies
==> default: Notice: Finished catalog run in 5.22 seconds
The SSH command responded with a non-zero exit status. Vagrant
assumes that this means the command failed. The output for this command
should be in the log above. Please read the output to determine what
went wrong.

What can I do?

Many thanks in advance for your answers!

Cyril


Why country singers are the chroniclers of our age

Published 25 Apr 2017 by in New Humanist Articles and Posts.

A new generation of female singers are rejecting patriotic tub-thumping and documenting the dark heart of rural America.

Cryptic DBLoadBalancer error while setting up MediaWiki

Published 24 Apr 2017 by Ian in Newest questions tagged mediawiki - Stack Overflow.

I'm experiencing a weird error while trying to install MediaWiki v1.28.1 on MacOSX El Capitan. My stack is Apache 2.4.18, PHP 7.1 & MySQL 5.7

When trying to do the web installation from localhost, a message displays saying 'LocalSettings.php not found', with a link below it to set up the wiki. When I click this link I get a stack trace that I can't get to the bottom of. The screen lists a deprecated-function message for mcrypt_create_iv() and also complains about ServiceContainer.php: Service disabled: DBLoadBalancer. I've done some searching and this seems to be a common problem; the solution is allegedly related to folder permissions for PHP sessions. See this similar error.

My php.ini 'session.save_path' is /var/lib/php/sessions. I've ensured this location is readable and writable using chmod 777 (rwx for all) on the folder. I do get a sess_* file created in this location, but it is always empty. I've created a test PHP script that runs on the same web server, starts a session, and writes to a session file in /var/lib/php/sessions, and this works fine, so I'm not sure whether this is permission related in this case. I have also given the htdocs location (in my case called mediawiki) and its subfolders and files full read and write permissions using chmod 777.

I have run out of ideas after a full day of investigation. I don't know what I don't know :)


Multiple Values in one "SMW Page Forms" field

Published 23 Apr 2017 by TotzKuete in Newest questions tagged mediawiki - Stack Overflow.

I ran into a problem with Semantic MediaWiki using the Page Forms extension. I wanted to create a field in a Page Form that can take more than one value, so I decided to use the tokens input type.

The problem is the following: if I type some values into the form field and save the page, Page Forms puts all the values - separated with commas - into one single SMW value.

For example: I have a form that creates a page about a scientific paper, and in this form I have a field called Authors. When I fill the field with two authors, let's say Pascal and Tesla, the final page does not have the two SMW values [[Author::Pascal]] and [[Author::Tesla]] - it has the single SMW value [[Author::Pascal, Tesla]].

Does anyone know how I can map the separate values in the form field to separate SMW values?

Thanks and greets, J


Lenovo ThinkPad Carbon X1 (gen. 5)

Published 22 Apr 2017 by Sam Wilson in Sam's notebook.

Five years, two months, and 22 days after the last time, I’m retiring my laptop and moving to a new one. This time it’s a Lenovo ThinkPad Carbon X1, fifth generation (manufactured in March this year, if the packaging is to be believed). This time, I’m not switching operating systems (although I am switching desktops, to KDE, because I hear Ubuntu is going all-out normal Gnome sometime soon).

So I kicked off the download of kubuntu-16.04.2-desktop-amd64.iso and while it was going started up the new machine. I jumped straight into bios to set the boot order (putting ‘Windows boot manager’ right at the bottom because it sounds like something predictably annoying), and hit ‘save’. Then I forgot what I was doing and wandered back to my other machine, leaving the new laptop to reboot and send itself into the Windows installation process. Oops.

There’s no way out! You select the language you want to use, and then are presented with the EULA—with a big ‘accept’ button, but no way to decline the bloody thing, and no way to restart the computer! Even worse, a long-press on the power button just suspended the machine, rather than force-booting it. In the end some combination of pressing on the power button while waking from suspend tricked it into dying. Then it was a simple matter of booting from a thumb drive and getting Kubuntu installed.

I got slightly confused at two points: at having to turn off UEFI (which I think is the ‘Windows boot manager’ from above?) in order to install 3rd party proprietary drivers (usually Lenovo are good at providing Linux drivers, but more on that later); and having to use LVM in order to have full-disk encryption (because I had thought that it was usually possible to encrypt without LVM, but really I don’t mind either way; there doesn’t seem to be any disadvantage to using LVM; I then of course elected to not encrypt my home directory).

So now I’m slowly getting KDE set up how I like it, and am running into various problems with the trackpoint, touchpad, and Kmail crashing. I’ll try to document the more interesting bits here, or add to the KDE UserBase wiki.


mosh, the disconnection-resistant ssh

Published 22 Apr 2017 by Carlos Fenollosa in Carlos Fenollosa — Blog.

The second post on this blog was devoted to screen and how to use it to make persistent SSH sessions.

Recently I've started using mosh, the mobile shell. It's targeted to mobile users, for example laptop users who might get short disconnections while working on a train, and it also provides a small keystroke buffer to get rid of network lag.

It really has few drawbacks, and if you ever ssh to remote hosts and get annoyed because your vim sessions or tail -F windows get disconnected, give mosh a try. I strongly recommend it.

Tags: software, unix

Comments? Tweet  


How can I load a JavaScript library through the browser JavaScript engine before the MediaWiki ResourceLoader does?

Published 21 Apr 2017 by ggworean in Newest questions tagged mediawiki - Stack Overflow.

I have a third-party library that's rather robust and contains a ton of import statements that conflict with the Node.js running on the server. I want to be able to use the library, but I can't access the module because of the import statements. Adding it as a module doesn't really work either, because it's then processed through Node.js's JavaScript engine and throws errors on import instead of, I'm guessing, require().

I also have the library installed via npm and it's showing up inside node_modules; however, when I do var Quill = require('quill') it doesn't load properly, as if it doesn't recognize that it's an npm dependency.
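
One approach that might help, sketched here with placeholder paths and module names: instead of feeding the library's ES-module source through the server-side pipeline, ship its pre-built browser (UMD) bundle as an ordinary ResourceLoader module registered in LocalSettings.php, so no import statements are ever parsed on the server.

// LocalSettings.php sketch: register the library's browser build (not the npm
// source) as a ResourceLoader module. Paths and the module name are placeholders.
$wgResourceModules['ext.mysite.quill'] = [
    'scripts'        => [ 'resources/lib/quill/quill.min.js' ],
    'styles'         => [ 'resources/lib/quill/quill.snow.css' ],
    'localBasePath'  => __DIR__,
    'remoteBasePath' => $wgScriptPath,
];

// Load it on every page, or call $out->addModules() only where it is needed.
$wgHooks['BeforePageDisplay'][] = function ( OutputPage $out ) {
    $out->addModules( 'ext.mysite.quill' );
};

The library then turns up as a plain browser global (for Quill's UMD build, window.Quill) rather than something you require().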


KDE PIM update for Zesty available for testers

Published 20 Apr 2017 by rikmills in Kubuntu.

Since we missed by a whisker getting updated PIM (kontact, kmail, akregator, kgpg etc..) into Zesty for release day, and we believe it is important that our users have access to this significant update, packages are now available for testers in the Kubuntu backports landing ppa.

While we believe these packages should be relatively issue-free, please bear in mind that they have not been tested as comprehensively as those in the main ubuntu archive.

Testers should be prepared to troubleshoot and hopefully report issues that may occur. Please provide feedback on our mailing list [1], IRC [2], or optionally via social media.

After a period of testing and verification, we hope to move this update to the main backports ppa.

You should have some command line knowledge before testing.
Reading about how to use ppa-purge is also advisable.

How to test KDE PIM 16.12.3 for Zesty:

Testing packages are currently in the Kubuntu Backports Landing PPA.

sudo add-apt-repository ppa:kubuntu-ppa/backports-landing
sudo apt-get update
sudo apt-get dist-upgrade

1. Kubuntu-devel mailing list: https://lists.ubuntu.com/mailman/listinfo/kubuntu-devel
2. Kubuntu IRC channels: #kubuntu & #kubuntu-devel on irc.freenode.net


v2.3.2

Published 20 Apr 2017 by fabpot in Tags from Twig.


v1.33.2

Published 20 Apr 2017 by fabpot in Tags from Twig.


Does Wikipedia use the MobileFrontend extension AND mobile subdomains at the same time?

Published 20 Apr 2017 by Atef Wagih in Newest questions tagged mediawiki - Stack Overflow.

I have set up a MediaWiki family and installed the MediaWiki MobileFrontend extension for better mobile usability.

The extension documentation says that it is used on Wikimedia projects (like Wikipedia).

The extension gives the same look as Wikipedia on mobile; however, I noticed a big difference.

Wikipedia on mobile seems to go to a mobile subdomain, in addition to the formatting that the MobileFrontend extension provides.

For example, a page about the "World Cup" has this URL when viewed on a desktop computer: https://en.wikipedia.org/wiki/World_Cup

while it has this URL when viewed from a mobile phone: https://en.m.wikipedia.org/wiki/World_Cup

My questions are:

1. Is this really a redirection to a mobile subdomain, or a mirror installation?

2. What are the benefits of redirecting to a subdomain when the MobileFrontend extension already provides the formatting?

3. How does the data get synchronized between the main site and the mobile site?
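
For what it's worth, Wikipedia's m. domains are not a separate install: it is the same wiki and database, with MobileFrontend rewriting links to the mobile hostname and serving the mobile view there (which, among other things, lets the caching layer keep desktop and mobile copies apart), so nothing needs synchronizing. A rough LocalSettings.php sketch of that pattern; treat the exact values as assumptions and check the MobileFrontend documentation for your version.

// LocalSettings.php sketch: same wiki, same database – MobileFrontend rewrites
// links to a mobile hostname and renders the mobile view there. Values below
// mirror the en.wikipedia.org → en.m.wikipedia.org pattern.
wfLoadExtension( 'MobileFrontend' );           // or require_once for older versions
$wgMobileUrlTemplate      = '%h0.m.%h1.%h2';   // insert "m." after the first host segment
$wgMFAutodetectMobileView = true;              // send phone browsers to the mobile view automatically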


Why do we use reason to reach nonsensical conclusions?

Published 20 Apr 2017 by in New Humanist Articles and Posts.

Q&A with Hugo Mercier and Dan Sperber, authors of a new book about the evolution of reason.

Running a patch on PonyDocs on a Windows machine with XAMPP raises the error "1 out of 1 hunk FAILED -- saving rejects to file LocalSettings.php.rej"

Published 20 Apr 2017 by Umang Buddhdev in Newest questions tagged mediawiki - Stack Overflow.

While running a PonyDocs patch on MediaWiki 1.28 on a Windows machine, I am getting the errors below.

C:\xampp\htdocs\mediawiki>patch -F3 -i C:\xampp\htdocs\mediawiki\ponydocs-master\MediaWiki.patch -p 1
can't find file to patch at input line 5
Perhaps you used the wrong -p or --strip option?
The text leading up to this was:

|Index: includes/page/Article.php
|===================================================================
|--- includes/page/Article.php (revision 572)
|+++ includes/page/Article.php (revision 576)

File to patch: LocalSettings.php
patching file LocalSettings.php
Hunk #1 FAILED at 147.
1 out of 1 hunk FAILED -- saving rejects to file LocalSettings.php.rej

I want to install PonyDocs on a base MediaWiki installation on a Windows machine.

Thanks Umang Buddhdev


List of available translations in MediaWiki with Translate and ULS

Published 19 Apr 2017 by RYN in Newest questions tagged mediawiki - Stack Overflow.

I created a wiki using MediaWiki 1.28 and installed the Translate & UniversalLanguageSelector extensions to create pages in two languages.
I need to create a list of available translations for the current page (something like the interlanguage links on Wikipedia).

How can I get a list of the available translations of each page?


list=allpages does not deliver all pages

Published 19 Apr 2017 by Chris Dji in Newest questions tagged mediawiki - Stack Overflow.

I have a problem: I want to fill a list with the names of all pages in my wiki. My script:

$TitleList = [];
$nsList = [];

$nsURL = 'wiki/api.php?action=query&meta=siteinfo&siprop=namespaces|namespacealiases&format=json';
$nsJson = file_get_contents($nsURL);
$nsJsonD = json_decode($nsJson, true);
foreach ($nsJsonD['query']['namespaces'] as $ns)
{
  if ( $ns['id'] >= 0 )
    array_push ($nsList, $ns['id']);    
}

# populate the list of all pages in each namespace
foreach ($nsList as $n)
{
  $urlGET = 'wiki/api.php?action=query&list=allpages&apnamespace='.$n.'&format=json';
  $json = file_get_contents($urlGET);
  $json_b = json_decode( $json ,true); 

  foreach  ($json_b['query']['allpages'] as $page)
  {    
    echo("\n".$page['title']);
    array_push($TitleList, $page["title"]);
  }
}

But there are still about 35% of pages missing that I can visit on my wiki (testing with "random page"). Does anyone know why this could happen?
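
A likely cause is pagination: list=allpages returns results in batches (10 by default), so anything beyond the first batch per namespace is silently dropped unless the continuation parameters are followed. A sketch of the inner loop, reusing the question's own variables and adding aplimit=max plus apcontinue handling:

# Sketch: fetch every batch per namespace by requesting the maximum batch size
# and following 'apcontinue' until the API stops returning continuation data.
foreach ($nsList as $n)
{
  $apcontinue = '';
  do {
    $urlGET = 'wiki/api.php?action=query&list=allpages&aplimit=max'
            . '&apnamespace=' . $n . '&format=json'
            . ($apcontinue !== '' ? '&apcontinue=' . urlencode($apcontinue) : '');
    $json_b = json_decode(file_get_contents($urlGET), true);

    foreach ($json_b['query']['allpages'] as $page)
    {
      array_push($TitleList, $page['title']);
    }

    if (isset($json_b['continue']['apcontinue'])) {
      $apcontinue = $json_b['continue']['apcontinue'];
    } elseif (isset($json_b['query-continue']['allpages']['apcontinue'])) {
      // older MediaWiki versions return continuation data in this form
      $apcontinue = $json_b['query-continue']['allpages']['apcontinue'];
    } else {
      $apcontinue = ''; // no more batches in this namespace
    }
  } while ($apcontinue !== '');
}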


In conversation with the J.S. Battye Creative Fellows

Published 19 Apr 2017 by carinamm in State Library of Western Australia Blog.

How can contemporary art lead to new discoveries about collections and ways of engaging with history? Nicola Kaye and Stephen Terry will discuss this idea, drawing on their experience of creating Tableau Vivant and the Unobserved.

In conversation with the J.S. Battye Creative Fellows
Thursday 27 April, 6pm
State Library Theatre.


Tableau Vivant and the Unobserved is the culmination of the State Library’s inaugural J.S. Battye Creative Fellowship.  The Creative Fellowship aims to enhance engagement with the Library’s heritage collections and provide new experiences for the public.

Tableau Vivant and the Unobserved visually questions how history is made, commemorated and forgotten. Through digital art installation, Nicola Kaye and Stephen Terry expose the unobserved and manipulate our perception of the past. Their work juxtaposes archival and contemporary imagery to create an interactive experience for the visitor, where unobserved lives from the archive collide with the contemporary world. The installation is showing at the State Library until 12 May 2017.

For more information visit: http://www.slwa.wa.gov.au


Filed under: community events, Exhibitions, Pictorial, SLWA collections, SLWA displays, SLWA Exhibitions, SLWA news, State Library of Western Australia, talks, Western Australia Tagged: contemporary art, discussion, installation, J.S. Battye Creative Fellowship, Nicola Kaye, Stephen Terry, talk

Wikipedia API: how to automatically redirect to the commonly referred-to content?

Published 18 Apr 2017 by joony0123 in Newest questions tagged mediawiki - Stack Overflow.

Hi, I am trying to use the Wikipedia API and right now it works great. My URL has exsentences=1 so I can get only the first sentence of the content. However, sometimes if the queried title has a list of different meanings, then it gives "commonly refers to ~".

For example: https://en.wikipedia.org/w/api.php?format=json&action=query&prop=extracts&exsentences=1&exintro=&explaintext=&redirects&titles=Trump

The extract of this gives me "Trump commonly refers to:".

While what I want is "Donald John Trump (born June 14, 1946) is the 45th and current President of the United States." from the Donald Trump wiki page.

It seems the 'redirects' parameter does not redirect to the most commonly referred-to article. Is there a way to achieve this?

Thanks in advance :)


v2.3.1

Published 18 Apr 2017 by fabpot in Tags from Twig.


v1.33.1

Published 18 Apr 2017 by fabpot in Tags from Twig.


How W3C checks its specifications for accessibility support: APA review

Published 18 Apr 2017 by Michael Cooper in W3C Blog.

The Accessible Platform Architectures (APA) Working Group works to ensure W3C specifications provide support for accessibility to people with disabilities. The group seeks new accessibility and technology experts to help influence a broad set of W3C specifications.

What we do

A primary APA responsibility is the review of W3C Technical Reports for potential benefits or concerns for web accessibility. W3C’s wide review process provides opportunity for groups like APA to submit comments to Working Groups developing these documents and work together on ways to better meet accessibility opportunities and mitigate accessibility risks in these technologies.

Many of the specifications that need comments impact the user interface (such as HTML, CSS, and SVG) and may require additional features to ensure content or interaction can be made available to users in alternate forms. While this is the layer where accessibility issues are most often predicted, the APA WG has found a need to review other types of technologies as well. For instance, transmission protocols and interchange APIs need to ensure accessibility-specific information is not omitted from content. Review of requirements and best practices helps to identify ways a technology can benefit accessibility in unexpected ways or to determine the need to perform early engineering of accessibility solutions. Therefore APA looks at every Technical Report that is published.

How we work

About half of the documents we review are quickly determined not to need in-depth review, and about a third of the remainder are found upon review not to need accessibility considerations addressed. The remaining documents go into a more intensive review process which may require developing comments or returning to the specification after the content matures. Sometimes this leads to more extensive projects, which in the past has included creation of joint task forces for media accessibility, web payments accessibility and CSS accessibility which help engineer solutions and have produced documents like Media Accessibility User Requirements. Usually, though, reviews become comments to the developers of specifications. Over the years, APA and other groups have submitted accessibility-related comments on scores of W3C specifications and notes.

Who we need

All of these paths require considerable expertise within the APA WG. Even the half of documents that are only reviewed lightly require people with sufficient understanding of the base technology and of potential accessibility issues to make a determination. The more in-depth reviews can require considerable knowledge of the base technology as well as understanding of potential barriers to people with various types of disabilities, and the ability to work with other groups to engineer solutions. No one person can provide this expertise for the wide range of technologies now under development at W3C, so a quorum of engaged experts is critical to the success of the APA mission.

How to contribute

This is where we say, we need your help. It is a big responsibility to be the first point of contact for accessibility of such a wide-ranging set of specifications that have great impact on so many lives today. Getting involved in this work is a unique opportunity to learn about a wide variety of technologies and to bring your accessibility expertise to bear in creative ways. The APA Working Group brings together a global set of professionals who complement each others’ experience to make meaningful impact on the universality of the Web. Participation is open to representatives of W3C Member organizations, and we can invite experts who do not work for those types of organizations. Please consider if you might have a role in ensuring Accessible Platform Architectures for the World Wide Web. See the participation page or contact Janina Sajka for information on how you can get involved, and please come help make the Web accessible!


In search of whiteness

Published 18 Apr 2017 by in New Humanist Articles and Posts.

Identity politics is back with a vengeance in 2017 – but one particular kind of identity is often left unexplored. Lola Okolosie and Vron Ware ask why.

Template loop detected - which setting will fix this?

Published 18 Apr 2017 by NeatNit in Newest questions tagged mediawiki - Stack Overflow.

In the smallest example I could create, I have 2 pages.

User:NeatNit/loop1:

loop1

{{{1|{{User:NeatNit/loop1|end}}}}}

Output is as expected:

loop1

loop1

end

User:NeatNit/loop:

start

{{User:NeatNit/loop1}}

Expected output:

start

loop1

loop1

end

Actual output:

start

loop1

Template loop detected: User:NeatNit/loop1

What can be changed to stop this from happening? If relevant, according to Special:Version this is MediaWiki 1.18.1.

Note: I am a user, but I have contact with the admin of the wiki.


Importing to Piwigo

Published 17 Apr 2017 by Sam Wilson in Sam's notebook.

Piwigo is pretty good!

I mean, I mostly use Flickr at the moment, because it is quick, easy to recommend to people, and allows photos to be added to Trove. But I’d rather host things myself. Far easier for backups, and so nice to know that if the software doesn’t do a thing then there’s a possibility of modifying it.

To bulk import into Piwigo one must first rsync all photos into the galleries/ directory. Then, rename them all to not have any unwanted characters (such as spaces or accented characters). To do this, first have a look at the files that will fail:

find -regex '.*[^a-zA-Z0-9\-_\.].*'

(The regex is determined by $conf['sync_chars_regex'] in include/config_default.inc.php which defaults to ^[a-zA-Z0-9-_.]+$.)

Then you can rename the offending files (replace unwanted characters with underscores) by extending the above command with an exec option:

find -regex '.*[^a-zA-Z0-9\-\._].*' -exec rename -v -n "s/[^a-zA-Z0-9\-\._\/]/_/g" {} \;

(I previously used a more complicated for-loop for this, that didn’t handle directories.)

Once this command is showing what you expect, remove the -n (“no action”) switch and run it for real. Note also that the second regex includes the forward slash, to not replace directory separators. And don’t worry about it overwriting files whose normalized names match; rename will complain if that happens (unless you pass the --force option).

Once all the names are normalized, use the built-in synchronization feature to update Piwigo’s database.

At this point, all photos should be visible in your albums, but there is one last step to take before all is done, for maximum Piwigo-grooviness. This is to use the Virtualize plugin to turn all of these ‘physical’ photos into ‘virtual’ ones (so they can be added to multiple albums etc.). This plugin comes with a warning to ensure that your database is backed up etc. but personally I’ve used it dozens of times on quite large sets of files and never had any trouble. It seems that even if it runs out of memory and crashes halfway, it doesn’t leave anything in an unstable state (of course, you shouldn’t take my word for it…).


Planet Freo updates

Published 17 Apr 2017 by Sam Wilson in Sam's notebook.

Fremantle Bid has been added to Planet Freo, and Moore and Moore café seems to have let their web hosting lapse, so has been temporarily removed.


Shed doors

Published 17 Apr 2017 by Sam Wilson in Sam's notebook.

My new house didn’t have a shed, but just a carport with no fourth wall (it was brilliant in every other respect, really—even insulated in the ceiling). So, as part of the WMF’s “Spark Project” (that aims to encourage employees to do more than just be wiki geeks), I decided to turn the carport into a shed by adding a set of wooden ledge-and-brace doors. There was a deadline of April 18 (i.e. tomorrow).

This post documents the process up to the point of being ready to hang the doors. Unfortunately, the hinges aren’t back from the galvanizer’s yet (or haven’t even been welded? Zoran the welder wasn’t communicating with me over the Easter break) so the project is incomplete; I’ll post more when the doors are up.

All of these photos and a few more are in a Flickr album.

Design

How wide is not wide enough, or what is the absolute minimum garage door size that will still fit a (small) car? I settled on 2.2 m, and subsequent testing has confirmed that this is fine for most cars—not that cars will be allowed in this shed, mind.

Some changes were made as construction progressed: the double studs either side of the door were turned 90° in order that the hinge bolts be able to add some extra joining strength between them; the sizing of all timber was adjusted to match what was available. Mostly things turned out as planned though.

Timber

Fremantle Timber Traders

I wasn’t sure what to build the doors with, but heading to Freo Timber Traders (above) and finding a couple of packs of nice old Wandoo settled it.

Selecting the boards

The 60×19 for the cladding came from a house in Wembley; the 110×28 for the ledges and braces came from an old shoe factory in Maylands. The ex-factory floor was covered in machine oil, and full of holes from where the machines had been bolted down. None were in any awkward spots though, and as I was planning on oiling the finished product I wasn’t too worried about the oil.

Doors

The first thing to do was to prepare the timber for the ledges and braces by removing the tongues and grooves with a draw-knife and plane (shown below). I wasn’t too worried about making these edges pristine or accurate; these are shed doors not furniture and I rather like the rough, used, look. It was also stinking hot while I was hacking away at these, and there’s something viscerally satisfying about draw-knives, sweat, and following the grain of the timber (and what shitty grain some of it was! But some, smooth as silk).

Using a draw-knife to remove the groove

The main joinery of the doors is the mortise-and-tenon joints at each end of the four 45° braces. These are what take the main load of the cladding hanging on the outside. (It’s worth noting here how common it is for this style of door to have its braces put on the wrong way around — the idea is that the brace is in compression and for it to go up from where the hinge attaches; if it’s running from the hinge point downwards then it’s pretty much doing nothing, and the door will sag.)

Cutting the tenon cheeks:

Cutting a tenon

Some tenons, with the ledges behind:

Ledges and braces, cut to size

The mortises were easier than the tenons in some way, although they took longer. Mortises, cut by hand as I was doing, are basically an exercise in holding a chisel vertical and square, thumping it with a fair bit of strength, and moving it 2 mm before repeating.

One end of each mortise is cut at 45° where the brace comes in; the other is square and is where the main force of the door is held.

Finished mortice, with 45° at one end
Laying out number two door
Laying out number two door

Once the ledges and braces were done, the cladding was screwed on from the back with 40 mm stainless steel decking screws.

Screwing the cladding on

The boards were spaced with 2 mm gaps to account for timber movement, to prevent the doors from warping. The ends were docked square and to size once all the boards were on.

Spacer between the boards

The finished doors:

Both doors finished

Walls

The two side walls are 2.1 m high and about 400 mm wide. They’re painted treated-pine stud frames clad with more 19×60 Wandoo flooring.

They’re fixed to the slab below:

Bottom plate bolted to slab

And screwed to the beam above:

Top stud fixings

(The threaded rod in the background of the above is a tie to hold the top beam in its place when the force of the open doors is tending to pull it outwards.)

The cladding was put on with the same spacing as the doors:

Cladding the side panels

And when completed, had the place looking a fair bit closer to enclosed:

Cladding the side panels

Incomplete

Unfortunately, this is where it stops for now, because I’m having some hinges fabricated and they’re not yet done. As soon as they are, and the thirty bolts are doing their thing, I’ll post some photos of the finished project.

(By the way, I am surprisingly grateful to the Spark Project initiative for making me get off my bum and actually get to work on these doors.)


1.5

Published 17 Apr 2017 by mblaney in Tags from simplepie.

Merge pull request #510 from mblaney/master

Version bump to 1.5 due to changes to Category class.


Submitted a Plugin? Please Check Your Emails!

Published 16 Apr 2017 by Ipstenu (Mika Epstein) in Make WordPress Plugins.

Currently ~30% of all new plugins are approved within 7 days of submission.

Why so low? People don’t reply to their emails. We have over 100 plugins waiting on replies from developers. At this precise moment (10:30 am US Pacific Time, Sunday April 16) there are ZERO plugins pending review. That means everyone who submitted a plugin between April 1 and today has been emailed.

If you didn’t get the email, please go check your spam. Free email clients like Hotmail, Yahoo, and Google tend to file us as ‘automated’ emails, which is not true, but whatever. Put plugins@wordpress.org in your whitelist (actually put @wordpress.org in a filter to have it never treated as spam and always important) because if you’re not getting emails from WP and you’ve submitted a plugin or a theme, you’re going to have a bad time.

Again. Everyone’s been emailed. I promise. Check your emails. Drop us a line if you can’t find it. Remember to whitelist us.

#reminder


View a MediaWiki as it was at a given date?

Published 15 Apr 2017 by Wingblade in Newest questions tagged mediawiki - Stack Overflow.

Is there a way to make an entire MediaWiki appear as it stood at a given date, i.e. have all pages automatically show the last revision before that date? Currently all I can do is scroll through each page's revision history and select the right one manually, which is extremely inefficient.

It would be great if there were a way to do this live; if not, I'm also open to making a dump of the wiki's state at a given date (as far as I know, dumping software usually only grabs the current state).
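
For reference, the MediaWiki API can already return the last revision of a single page before a given date, using prop=revisions with rvstart and rvdir=older. A minimal PHP sketch (the endpoint, page title, and timestamp are placeholders):

<?php
// Fetch the content of one page as it stood at a given date:
// the newest revision that is not newer than $asOf.
$asOf  = '2016-01-01T00:00:00Z';
$title = 'Main Page'; // placeholder
$url   = 'https://example.org/w/api.php?' . http_build_query( [
    'action'  => 'query',
    'prop'    => 'revisions',
    'titles'  => $title,
    'rvlimit' => 1,
    'rvdir'   => 'older',   // walk backwards in time...
    'rvstart' => $asOf,     // ...starting from this timestamp
    'rvprop'  => 'timestamp|content',
    'format'  => 'json',
] );
$data = json_decode( file_get_contents( $url ), true );
foreach ( $data['query']['pages'] as $page ) {
    if ( isset( $page['revisions'][0]['*'] ) ) {
        echo $page['revisions'][0]['*']; // wikitext of that revision
    }
}

This only answers the per-page half of the question: a whole-wiki "as at" view would still mean one such request per page.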


Failed to use ParserFunctions on mediawiki

Published 14 Apr 2017 by Ying Wang in Newest questions tagged mediawiki - Stack Overflow.

I want to link a form to an edit page.

My code looks like this:

(screenshot of the wiki markup)

but the result is:

(screenshot of the rendered result)

Please help me, thank you!


Interview on Stepping Off: Rewilding and Belonging in the South West

Published 14 Apr 2017 by Tom Wilson in tom m wilson.

You can listen to a recent radio interview I did about my new book with Adrian Glamorgan here.

Access nested JSON data on MediaWiki

Published 13 Apr 2017 by Shea Belsky in Newest questions tagged mediawiki - Stack Overflow.

I am trying to access the contents of a nested JSON from within a MediaWiki wiki. I have already researched the External Data extension, but it does not support nested JSON objects: it only works with one-dimensional objects, not potentially nested properties.

Let's assume I want to work with the Chuck Norris API, for example. It returns a JSON in this form:

{
    "type": "success",
    "value": {
        "id": 334,
        "joke": "John Doe qualified with a top speed of 324 mph at the Daytona 500, without a car.",
        "categories": []
    }
}

I want to be able to access the contents of the value key in some meaningful form in MediaWiki. This functionality is not offered by the External Data extension, and I was wondering if there was another way that I could do it (another extension, writing custom PHP, writing custom JavaScript).
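
In the absence of extension support, one workaround along the "custom PHP" route is a small tag extension registered from LocalSettings.php that fetches the JSON and walks a dot-separated path into it. This is a sketch only: the <jsonget> tag name, its attributes, and the wiring are invented for illustration, and a real version would need caching and URL whitelisting.

<?php
// In LocalSettings.php: a minimal tag extension, used in wikitext as
//   <jsonget url="https://api.example.org/jokes/random" path="value.joke" />
$wgHooks['ParserFirstCallInit'][] = function ( Parser $parser ) {
    $parser->setHook( 'jsonget', function ( $input, array $args, $parser, $frame ) {
        $url  = isset( $args['url'] ) ? $args['url'] : '';
        $path = isset( $args['path'] ) ? $args['path'] : '';
        $data = json_decode( file_get_contents( $url ), true );
        // Walk the dot-separated path, e.g. "value.joke" -> $data['value']['joke'].
        foreach ( explode( '.', $path ) as $key ) {
            if ( !is_array( $data ) || !array_key_exists( $key, $data ) ) {
                return '';
            }
            $data = $data[ $key ];
        }
        return htmlspecialchars( is_scalar( $data ) ? (string)$data : json_encode( $data ) );
    } );
    return true;
};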


Kubuntu 17.04 Released!

Published 13 Apr 2017 by valorie-zimmerman in Kubuntu.

Codenamed “Zesty Zapus”, Kubuntu 17.04 continues our proud tradition of integrating the latest and greatest open source technologies into a high-quality, easy-to-use Linux distribution.

The team has been hard at work through this cycle, introducing new features and fixing bugs.

Under the hood, there have been updates to many core packages, including a new 4.10-based kernel, KDE Frameworks 5.31, Plasma 5.9.4 and KDE Applications 16.12.3.

The Kubuntu Desktop has seen some exciting improvements, with newer versions of Qt, updates to major packages like Krita, Kdenlive, Firefox and LibreOffice, and stability improvements to the Plasma desktop environment.

For a list of other application updates, upgrading notes and known bugs, be sure to read our release notes.

Download 17.04 or read about how to upgrade from 16.10.


Close encounters of the everyday kind

Published 13 Apr 2017 by in New Humanist Articles and Posts.

A new book asks if microdosing LSD could make you happier and more productive. Why might we need such a helping hand?

Tabulate 2.9.0

Published 13 Apr 2017 by Sam Wilson in Sam's notebook.

It turned out to be simpler than I’d thought to add the ENUM-modifying feature to Tabulate’s schema editor, so I’ve done it and released version 2.9.0.


Should Tabulate support ENUM columns?

Published 12 Apr 2017 by Sam Wilson in Sam's notebook.

I’m trying to figure out if it’s worthwhile adding better support for enumerated fields in Tabulate. MySQL’s ENUM type is useful when one has an immutable list of options such as days of the week, seasons of the year, planets in the solar system, or suits in a deck of cards. It can also be good for making ternary options more distinct and communicable than a nullable boolean field.

But really, I’ve never used them! I mean, I have in one place (which is why this is coming up for me at all, because I’m trying to do some work with an ancient database that I’ve pulled into WordPress + Tabulate) but I’ve never used them in any situation that I couldn’t have more easily solved by adding a cross-reference to another table.

Reference tables are far easier to work with, and allow other metadata to be attached to the referenced values (such as colour, in the card-suit example).

However, ENUMs are already supported by Tabulate for the display of data, so I guess I should just do the little extra bit of work required to add support to the table-structure editing as well. Even if no one uses it.

(On a related note, I don’t think SET fields are going to get the same treatment!)
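
For context, the schema change such an editor has to issue boils down to a single ALTER TABLE statement. A rough sketch of the idea only (the table, column, and option names are made up, and this is not Tabulate's actual code):

<?php
// Change the allowed values of an ENUM column via WordPress's $wpdb.
global $wpdb;
$options = [ 'hearts', 'diamonds', 'clubs', 'spades' ];
$quoted  = implode( ', ', array_map( function ( $option ) {
    return "'" . esc_sql( $option ) . "'";
}, $options ) );
$wpdb->query( "ALTER TABLE cards MODIFY COLUMN suit ENUM($quoted) NOT NULL" );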


Update on the April 11th SFO2 Power Outage

Published 12 Apr 2017 by DigitalOcean in DigitalOcean Blog.

On April 11th at 06:43 UTC, DigitalOcean's SFO2 region experienced an outage of compute and networking services. The catalyst of this incident was the failure of multiple redundant power distribution units (PDU) within the datacenter. Complications during the recovery effort prolonged the incident and caused intermittent failures of our control panel and API. We'd like to apologize, share more details about exactly what happened, and talk about how we are working to make sure it doesn't happen again.

The Incident

The initial power loss affected SFO2 including the core networking infrastructure for the region. As power and connectivity were restored, our event processing system was placed under heavy load from the backlog of in-progress events. The database backing this system was unable to support the load of the SFO2 datacenter recovery in addition to our normal operational load from other datacenters. This temporarily disabled our control panel and API. We then proceeded with recovery on multiple fronts.

Timeline of Events

06:15 UTC - A datacenter-level PDU in the building housing our SFO2 region suffered a critical failure. Hardware automatically began drawing power from a secondary PDU.

06:40 UTC - The secondary PDU also suffered a failure.

06:43 UTC - Multiple alerts indicated that SFO2 was unreachable and initial investigations were undertaken by our operations and network engineering teams.

07:00 UTC - After finding that all circuits in the region were down, we opened a ticket with the facility operator.

07:49 UTC - A DigitalOcean datacenter engineer arrived and confirmed the power outage.

08:27 UTC - The facility operations staff arrived and began restoring power to the affected racks.

09:04 UTC - Recovery commenced and both management servers and hypervisors containing customer Droplets began to come back online.

09:49 UTC - After an initial "inception problem" where portions of our compute infrastructure which were self-hosted couldn't bootstrap themselves, services began to recover.

09:53 UTC - Customer reports and alerts indicated that our control panel and API had become inaccessible. Our event processing system became overloaded attempting to process the backlog of pending events while also supporting the normal operational load of our other regions. Work commenced to slow-roll activation of services.

16:32 UTC - All services activated in SFO2 and event processing re-enabled; customers able to start deploying new Droplets. Existing Droplets not yet restarted. Work began to restart Droplets in a controlled way.

19:43 UTC - 50% of all Droplets restored.

20:15 UTC - All Droplets and services fully restored.

Future Measures

There were a number of major issues that contributed to the cause and duration of this outage, and we are committed to providing you with the stable and reliable platform you require to launch, scale, and manage your applications.

During this incident, we were faced with conditions from our provider that were outside of our control. We're working to implement stronger safeguards and validation of our power management system to ensure this power failure does not reoccur.

In addition, we're conducting a review of our datacenter recovery procedures to ensure that we can move more quickly in the event that we do lose power to an entire facility.

Finally, we will be adding additional capacity to our event processing system to ensure it is able to sustain significant peaks in load, such as the one that occurred here.

In Conclusion

We wanted to share the specific details around this incident as quickly and accurately as possible to give you insight into what happened and how we handled it. We recognize this may have had a direct impact on your business and for that we are deeply sorry. We will be issuing SLA credits to affected users, which will be reflected on their May 1st invoice, and we will continue to explore better ways of mitigating future customer impacting events. The entire team at DigitalOcean thanks you for your understanding and patience.


How to add a specific line on top of each page (Mediawiki)

Published 12 Apr 2017 by Chris Dji in Newest questions tagged mediawiki - Stack Overflow.

I wanted to add the VoteNY extension to my wiki so that all users can rate each other. This extension needs the line

<vote type="1"></vote>

on each page that should be votable. Now I am trying to write this line at the top of each existing page (~10,000 pages) programmatically. I already found a hook to add the line when a new article is created, but every existing page should have this line too.

This is what I have so far:

<?php
$TitleList = [];    
# get the edit ticket
$jsn = file_get_contents('http://wiki/api.php?action=query&meta=tokens&format=json');
$json_a = json_decode($jsn, true);
$token = $json_a['query']['tokens']['csrftoken'];

# populate the list of all pages
$urlGET = 'http://wiki/api.php?action=query&list=allpages&format=json';
$json = file_get_contents($urlGET);
$json_b = json_decode( $json ,true);
foreach  ($json_b['query']['allpages'] as $page)
{
    array_push($TitleList, $page['title']);
}

# add the line on top of every page
foreach ($TitleList as $title)
{
    $data = array(
        'action'      => 'edit',
        'title'       => $title,
        'prependtext' => '<vote type="1"></vote>',
        'token'       => $token);
    # Create a connection
    $url = 'http://wiki/api.php?';
    $ch = curl_init($url);
    # Form data string, automatically urlencode
    $postString = http_build_query($data, '', '&');
    # Setting our options
    curl_setopt($ch, CURLOPT_POST, 1);
    curl_setopt($ch, CURLOPT_POSTFIELDS, $postString);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    # Get the response
    $response = curl_exec($ch);
    curl_close($ch);            
    echo($response);    
}

This works so far, but I noticed that &list=allpages didn't give me all the pages of my wiki (about 40% are missing).
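
In case it helps a future reader: list=allpages is paginated (the default aplimit is only 10), so missing titles are almost certainly unfetched continuation batches. A sketch of a continuation loop against the same hypothetical http://wiki/api.php endpoint:

<?php
// Collect every page title by following API continuation
// (list=allpages returns at most `aplimit` titles per request).
$titles = [];
$params = [
    'action'   => 'query',
    'list'     => 'allpages',
    'aplimit'  => 'max',
    'format'   => 'json',
    'continue' => '',
];
do {
    $url    = 'http://wiki/api.php?' . http_build_query( $params );
    $result = json_decode( file_get_contents( $url ), true );
    foreach ( $result['query']['allpages'] as $page ) {
        $titles[] = $page['title'];
    }
    // The API tells us how to fetch the next batch, if there is one.
    if ( isset( $result['continue'] ) ) {
        $params = array_merge( $params, $result['continue'] );
    }
} while ( isset( $result['continue'] ) );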


How to load MathJax in a MediaWiki wiki through Common.js?

Published 11 Apr 2017 by alpha in Newest questions tagged mediawiki - Stack Overflow.

I have a MediaWiki wiki running fine. I also have MathJax on my root folder, and it works fine (the test files work, and I am already using it in some static pages).

I am trying to get the wiki to load MathJax globally (without using extensions), and though I see "MathJax.js" loading fine in the developer tools, all instances of LaTeX (e.g. $e^x$) are left as is.

Here is what I have in my Common.js:

$(function() {
importScriptURI('<correct path>/MathJax/MathJax.js?config=MML_HTMLorMML-full');
});

What could I be doing wrong?


MediaWiki Syntaxhighlighter on Windows Server

Published 11 Apr 2017 by user4333011 in Newest questions tagged mediawiki - Stack Overflow.

We've installed Syntaxhighlighter for MediaWiki on Windows Server 2012 R2 according to the directions here:

https://www.mediawiki.org/wiki/Extension:SyntaxHighlight#Installation

and it is not working. I suspect it has something to do with the step that shows how to make the pygmentize binary executable on Linux, while being completely silent on what needs to be done in a Windows environment. I'm not incredibly familiar with Python, but the environment variable is set, and on the command line I can run pygmentize in its directory if I add the .py extension; that doesn't fix the issue with SyntaxHighlight, though. Without the .py extension Windows doesn't see it as an executable.

So the question is: what do I do on Windows Server to make pygmentize an executable that can be used by MediaWiki's syntaxhighlighter to effectively highlight syntax?

Or maybe I'm incorrect and that isn't the issue, in which case I welcome any insights!
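
One thing that may be worth trying (an assumption, not something the extension documents for Windows specifically): the extension invokes whatever $wgPygmentizePath points at, and a pip install of Pygments on Windows usually drops a pygmentize.exe wrapper into the Python Scripts folder. Pointing the setting at that wrapper sidesteps the Unix-only chmod step:

<?php
// In LocalSettings.php: tell SyntaxHighlight where to find Pygments on Windows.
// The path below is an example; adjust to wherever pip put pygmentize.exe.
$wgPygmentizePath = 'C:\\Python27\\Scripts\\pygmentize.exe';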


Multilingual Mediawiki installation using Wiki Family Vs single multilingual MediaWiki Extension

Published 10 Apr 2017 by Atef Wagih in Newest questions tagged mediawiki - Stack Overflow.

I am trying to setup a multilingual encyclopedia (4 languages), where I can have both:

As the wiki grows, I understand that the content of each language can vary.

However, I want to be able to work as fluently as possible between languages.

I checked this article, dating back to 2012, which has a comment from Tgr that basically condemns both solutions.

I also checked this Mediawiki Help Article, but it gives no explanation about the differences between both systems.

My questions are:

1- What is the preferred option now for a multilingual wiki environment that gives the most capabilities and the best user experience, given that some of the languages I want are right-to-left and some are left-to-right? I want internationalization of category names, I need to link the categories to their corresponding translations, and I want users to see the interface in the language that the article is written in.

So Basically as if I have 4 encyclopedias, but the articles are linked to their corresponding translations.

2- Which system would give me a main page per language? So the English readers would see an English homepage, the French readers a French homepage, etc.?

EDIT:

I have a dedicated server, so the limitation of shared hosting is not there.

Thank you very much.


How we invented nature

Published 10 Apr 2017 by in New Humanist Articles and Posts.

Today we take it for granted that something called “nature” exists. But the concept owes much to a Prussian adventurer

Wikimania submission: apt install mediawiki

Published 9 Apr 2017 by legoktm in The Lego Mirror.

I've submitted a talk to Wikimania titled apt install mediawiki. It's about getting the MediaWiki package back into Debian, and efforts to improve the overall process. If you're interested, sign up on the submissions page :)


Title and URL of articles in a MediaWiki-based site automatically changed to uppercase

Published 9 Apr 2017 by Kwon in Newest questions tagged mediawiki - Stack Overflow.

I can't speak English very well. So I ask for your understanding.

I have a site based on MediaWiki.

And I have a problem: when I write articles, the page title is automatically modified to uppercase (the first letter).

Like this: iPhone => IPhone

Please click the link.

(screenshot of the page title)

I want my site to work like Wiktionary. Is that possible?
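
Assuming the goal is Wiktionary-style titles that keep a lowercase first letter, MediaWiki has a configuration switch for exactly this; a one-line LocalSettings.php sketch:

<?php
// In LocalSettings.php: do not force the first letter of page titles to
// uppercase ($wgCapitalLinks defaults to true; Wiktionary sets it to false).
$wgCapitalLinks = false;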


Kubuntu 17.04 Release Candidate – call for testers

Published 9 Apr 2017 by clivejo in Kubuntu.

Today the Kubuntu team is happy to announce that Kubuntu Zesty Zapus (17.04) RC is released. With this release candidate, you can see and test what we are preparing for 17.04, which we will be releasing on April 13, 2017.

NOTE: This is a release candidate. Kubuntu pre-releases are NOT recommended for:

* Regular users who are not aware of pre-release issues
* Anyone who needs a stable system
* Anyone uncomfortable running a possibly frequently broken system
* Anyone in a production environment with data or work-flows that need to be reliable

Getting Kubuntu 17.04 RC:
* Upgrade from 16.10: run `do-release-upgrade -d` from a command line.
* Download a bootable image (ISO) and put it onto a DVD or USB Drive : http://cdimage.ubuntu.com/kubuntu/daily-live/20170408/

Please see the Release Notes for more details, where to download, and known problems. We welcome help to fix those final issues; please join the Kubuntu-Devel mailing list [1], hop into #kubuntu-devel on freenode to connect with us, or use the Ubuntu tracker [2].

1. Kubuntu-devel mail list: https://lists.ubuntu.com/mailman/listinfo/kubuntu-devel

2. Official Ubuntu tracker: http://iso.qa.ubuntu.com/


Regexp assistance needed parsing mediawiki template with Javascript

Published 8 Apr 2017 by BrianFreud in Newest questions tagged mediawiki - Stack Overflow.

I'm handling Mediawiki markup with Javascript. I'm trying to remove certain parameters. I'm having trouble getting to exactly the text, and only the text, that I want to remove.

Simplified down, the template text can look something like this:

{{TemplateX
| a =
Foo bar
Blah blah

Fizbin foo[[domain:blah]]

Ipsum lorem[[domain:blah]]
|b =1
|c = 0fillertext
|d = 1alphabet
| e =
| f = 10: One Hobbit
| g = aaaa, bbbb, cccc, dddd, し、1 =小さい、2 =標準、3 =大
|h = 15000
|i = -15000
| j = Level 4 [[domain:filk|Songs]]
| k =7 fizbin, 8 [[domain:trekkies|Shatners]]
|l = 
|m = 
}}

The best I've come up with so far is

/\|\s?(a|b|d|f|j|k|m)([^][^\n\|])+/gm

Updated version:

/\|\s?(a|b|d|f|j|k|m)(?:[^\n\|]|[.\n])+/gm

which gives (with the updated regexp):

{{TemplateX


|c = 0fillertext

| e =

| g = aaaa, bbbb, cccc, dddd
|h = 15000
|i = -15000

|Songs]]

|Shatners]]
|l = 

But what I'm trying to get is:

{{TemplateX
|c = 0fillertext
| e =
| g = aaaa, bbbb, cccc, dddd
|h = 15000
|i = -15000
|l = 
}}

I can deal with the extraneous newlines, but I still need to make sure that '|Songs]]' and '|Shatners]]' are also matched by the regexp.

Regarding Tgr's comment below,

For my purposes, it is safe to assume that every parameter starts on a new line, where | is the first character on the line, and that no parameter definition includes a | that isn't within a [[foo|bar]] construct. So '\n|' is a safe "start" and "stop" sequence. So the question boils down to, for any given params (a,b,d,f,j,k, and m in the question), I need a regex that matches 'wanted param' in the following:

| [other param 1] = ... 
| [wanted param] = possibly multiple lines and |s that aren't after a newline
| [other param 2]

EDIT: Final working regexp:

/\|\s?(a|b|d|f|j|k|m)((?![\r\n]+\|)[\W\w](?!}}))+/gm

Robin Mackenzie's answer got it 95% of the way, but it had problems with unicode in the values or if there was an '=' in a value. The above version addresses both of those problems. https://regex101.com/r/HHCzeV/3 has a functional example.


Failover in local accounts

Published 7 Apr 2017 by MUY Belgium in Newest questions tagged mediawiki - Server Fault.

I would like to use MediaWiki as documentation with access privileges. I use the LdapAuthentication extension (here: https://www.mediawiki.org/wiki/Extension:LDAP_Authentication/Configuration_Options) in order to get users authenticated against an LDAP directory.

For various reason, the authentication should continue working even if the LDAP fails.

How can I get a fail-over (for example using the passwords in the local SQL database?) which would enable the wiki to remain accessible even if the infrastructure fails?


Shiny New History in China: Jianshui and Tuanshan

Published 6 Apr 2017 by Tom Wilson in tom m wilson.

  The stones in this bridge are not all in a perfect state of repair.  That’s part of its charm.  I’m just back from a couple of days down at Jianshui, a historic town a few hours south of Kunming with a large city wall and a towering city gate.  The trip has made me reflect on […]

Adding external JavaScripts to MediaWiki 1.28

Published 6 Apr 2017 by user3103115 in Newest questions tagged mediawiki - Stack Overflow.

In my current 1.25.1 MediaWiki setup, I have a lot of external JavaScripts like bxslider, qtip, datatables, etc. embedded into the header in a very intrusive way.

I simply added

$out->addHeadItem('danalytics','<script src="https://aionpowerbook.com/bxslider/jquery.bxslider.min.js"></script><link href="https://aionpowerbook.com/bxslider/jquery.bxslider.css" rel="stylesheet" />
<script>
$(document).ready(function(){
$(\'.bxslider\').bxSlider({
captions: true,
auto: ($(".bxslider li").length > 1) ? true: false,
pager: ($(".bxslider li").length > 1) ? true: false,
speed: 6000,
infiniteLoop: true,
});});
$(document).ready(function(){
$(\'.bxslidermap\').bxSlider(
);});
</script>
');

and the rest of the scripts into SkinMonoBook.php just after

$out->addStyle( $this->stylename . '/IE70Fixes.css', 'screen', 'IE 7' );

I know I wasn't supposed to touch core files, but it worked, and I was fine with it.

Anyway, recently I have been trying to update the MediaWiki software to 1.28.0 but no matter how I try to implement all the JavaScripts back, I get

jQuery is not defined
$ is not defined

Sometimes it works, sometimes it doesn't. I have no idea what is wrong.

I tried this: How to add external <script> to <head> section for all mediawiki pages?

Didn't work. This was kinda obvious, as MediaWiki's jQuery is loaded at the bottom (I think?), but even after adding the jQuery library before any of my JS it would only work sometimes.

I also use the PageDisqus extension, so I thought I would just copy/paste my external JavaScripts into the PageDisqus code. That extension also loads JavaScript on every page and seems to always work, so I thought why not. But again, sometimes it works, sometimes it doesn't.

here is an example:

https://aionpowerbook.com/pb_new/index.php?title=Main_Page

the slider at the top only sometimes loads; sometimes I need to do a hard refresh to make it load, but usually it works in one browser while in another I keep getting "ReferenceError: jQuery is not defined".

Here I also tried adding my own jQuery before anything else, but nope, still getting

$(...).bxSlider is not a function

from time to time.

Any help will be appreciated as I am out of ideas here.
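
For what it's worth, the usual way to ship site-wide scripts like these in 1.28 is to register them as a ResourceLoader module, which removes the load-order guesswork because module code only runs once jQuery and the rest of the environment are ready. A rough sketch (the module name, file paths, and hook wiring here are illustrative, not taken from the site above):

<?php
// In LocalSettings.php: register the scripts as a ResourceLoader module
// so they are delivered after jQuery and the core modules.
$wgResourceModules['site.sliders'] = [
    'scripts'        => [ 'bxslider/jquery.bxslider.min.js', 'bxslider-init.js' ],
    'styles'         => [ 'bxslider/jquery.bxslider.css' ],
    'localBasePath'  => "$IP/resources/assets",
    'remoteBasePath' => "$wgScriptPath/resources/assets",
];

// Attach the module to every page.
$wgHooks['BeforePageDisplay'][] = function ( OutputPage $out, Skin $skin ) {
    $out->addModules( 'site.sliders' );
    return true;
};

The inline $(document).ready(...) initialisation would then live in one of the module's script files rather than being echoed into the page header.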


Plugin Submissions Open

Published 6 Apr 2017 by Ipstenu (Mika Epstein) in Make WordPress Plugins.

It took a little longer than expected or desired, due to a couple serious issues. Not everything is perfect yet, but submissions are open.

Some things are different. Most are the same.

Zips are required (not links)

Instead of a link, you will now upload your zip directly. In doing so, the plugin slug will be assigned for you. If your zip isn’t valid, if your plugin headers aren’t valid, you’ll get an error message. Read it carefully. If you get a message that says your plugin URL and Author URL can’t be the same, that’s exactly what it means.

* Plugin URI: https://example.com/plugin
* Author URI: https://example.com/

Those have to be different.

Slugs are determined from plugin headers

That’s right, the name you get is based on your Plugin Headers. That means this:

* Plugin Name: My Cool Plugin

That will give you a plugin slug of my-cool-plugin and that will get you an email from us telling you not to use “Plugin” in your URL, and is “my-cool” okay? Please do read the emails and reply properly. If it’s a simple typo, we’ll just fix it for you. If we’re not sure what’s best, we’ll email.

No more 7-day rejections

Since the new and pending queues are now split, we won’t be rejecting plugins after seven days. If you don’t reply to the email, your plugin just sits there and waits for you. But the good news here is that you can’t resubmit. You’ll get a warning message that the plugin already exists.

Release and iterate

All of this is going to be iterated on and improved. We have goals to make the add page list everything you have pending, for example, and we’ll be implementing a limit of how many plugins you can upload at once to prevent things like one person submitting ten at once. Since our queue is rarely more than a week, that’s no hardship.

I’m primarily concentrating on making the back-end of the directory work better and documenting the process so we can get down to the business of expanding the team. Divide and conquer as they say.

What about new reviewers?

Soon! This year! Hopefully by summer. My modest goals are as follows:

  1. Get all the current reviewers up to speed
  2. Invite a couple select people to join as guinea pigs
  3. Train them up good and work out the kinks in that
  4. Post an open call for new members
  5. Accept some and train them
  6. Go back to step 4

By limiting how many new members we take at a time, we can release and iterate our training methods and documentation faster.

As always, remember to whitelist the plugins@wordpress.org email address and follow this blog for updates.

Thank you for your patience!


WCAG Accessibility Conformance Testing (ACT)

Published 6 Apr 2017 by Shadi Abou-Zahra in W3C Blog.

Today W3C published a First Public Working Draft of the Accessibility Conformance Testing (ACT) Rules Format 1.0 specification. It defines a common approach for writing test rules for the Web Content Accessibility Guidelines (WCAG). This allows people to document and share testing procedures, including automated, semi-automated, and manual procedures. This accelerates the development of evaluation and repair tools, based on clearly documented testing procedures.

The ACT Rules Format will be an important step forward in web accessibility evaluation and repair, following many years of prior work. Earlier work conducted at W3C includes the Techniques for Accessibility Evaluation and Repair Tools, which described evaluation approaches used by some tools, and the Evaluation and Report Language (EARL) intended to support data interchange between tools. These were used in the creation of the Accessibility Support Database, intended to be a crowd-sourced repository of information about support for WCAG 2.0 Techniques and in many cases populated with the aid of evaluation tools.

These resources have helped to shape the landscape of web accessibility evaluation, but interoperable results have not yet been achieved. Interoperable evaluation is important because of the large number of policies that require conformance, in some fashion, to WCAG 2.0. When different tools report different results about conformance, site owners and users are confused about the actual conformance status.

To tackle this problem, the Automated WCAG Monitoring Community Group was formed in 2014 to incubate further work. Working together with representatives of many accessibility evaluation organizations, this group has been collecting a set of “rules” that describe concrete and unambiguous testing procedures. To ensure these rules could be understood by any tool, a need for a common framework was identified. The Accessibility Conformance Testing Task Force was formed to develop this framework as a specification, and the ACT Rules Format is the result.

With the ACT Rules Format in place, it will be possible to build an open repository of evaluation rules. Some organizations have already opened up or committed to opening up their repositories. This leads to a better common understanding of the technical requirements for web accessibility.

But before we can dream of such a repository we must first get the ACT Rules Format firmed up. Specifically, how can this help you to share and use testing procedures? Please let us know your thoughts on this document directly via GitHub or email.

Find out more about ACT and how to comment, contribute, and participate!


Migration/ Import MediaWiki 1.20.8 to Confluence 5.9.4

Published 6 Apr 2017 by T. Nagel in Newest questions tagged mediawiki - Stack Overflow.

We want to import our MediaWiki into Confluence. In my research I have only found the UWC (Universal Wiki Converter), which is no longer supported by Atlassian.

Is there an easy way to migrate? Another tool, another application, or can I still use the UWC even though it isn't supported?


The Mpemba effect, ant navigation, and the mystery of pure gold

Published 6 Apr 2017 by in New Humanist Articles and Posts.

Chemistry, Biology, Physics: Three scientists talk through big recent developments in their fields.

User's access to category mediawiki

Published 5 Apr 2017 by Ivan Ivanov in Newest questions tagged mediawiki - Stack Overflow.

How can I restrict access to a category in MediaWiki? I need to create a category whose pages are accessible only to sysops. I'm trying to use:

Extension:Restrict_access_by_category_and_group - not working

Extension:CategoryPermissions - blocked all categories except the one I need


How retrieve wikipage text in XML format of "text/x-wiki" format

Published 5 Apr 2017 by Laurens Voncken in Newest questions tagged mediawiki - Stack Overflow.

Question:

I have been trying to extract wiki page information using the MediaWiki RESTful API. I retrieve the page text in "text/x-wiki" format, but I need XML elements if I want to transform the data in Talend.

Is it possible to retrieve Mediawiki query results in a full XML format (so without the text/x-wiki)?

[Wrong] Example text/x-wiki format:

<format>text/x-wiki</format>
<text xml:space="preserve" bytes="952">
   {{Handelingsperspectief
   |Context=OLmK Naasten - Contact met lotgenoten,
   |Intentional Element decomposition type=IOR
   }}
</text>

[Right] Example XML format:

<format>xml</format>
<text xml:space="preserve" bytes="952">
   <Handelingsperspectief>
    <Context>OLmK Naasten - Contact met lotgenoten</context>
    <Intentional_Element_decomposition_type>IOR</Intentional_Element_decomposition_type>
   </Handelingsperspectief>
</text>

Context/Situation:

As part of research into creating an "interactive 3D narrative experience" (a game) that stimulates human understanding, in order to help solve societal problems, wiki content must be extracted. This wiki content comes from a (Semantic) MediaWiki. The wiki expresses the Expertise Management ontology (EMont), an ontology for describing human interaction within certain conditions. The wiki expresses the ontology in an object-oriented paradigm: each page represents an instance of an EMont element.
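
If the wiki really is Semantic MediaWiki, one route to structured values (rather than raw template wikitext) is SMW's ask API module, which returns each requested property as JSON or XML. A sketch, with the endpoint, condition, and property names as placeholders loosely based on the example above:

<?php
// Query Semantic MediaWiki's ask API for structured property values
// instead of raw wikitext. Endpoint, condition and printouts are placeholders.
$ask = '[[Category:Handelingsperspectief]]|?Context|?Intentional Element decomposition type';
$url = 'https://example.org/wiki/api.php?' . http_build_query( [
    'action' => 'ask',
    'query'  => $ask,
    'format' => 'json',
] );
$data = json_decode( file_get_contents( $url ), true );
foreach ( $data['query']['results'] as $pageName => $result ) {
    // 'printouts' holds one entry per requested property.
    echo $pageName . ': ' . json_encode( $result['printouts'] ) . "\n";
}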


Reminder: How SVN on WordPress.org Works

Published 5 Apr 2017 by Ipstenu (Mika Epstein) in Make WordPress Plugins.

Now that the new directory is out, it’s time for a couple quick reminders on how the SVN repositories work on WordPress. We have documentation on how SVN works here, but the information can be overwhelming.

Use readme.txt (not .MD)

A readme.md file is not the same as our readme.txt format. If you try to use one and expect everything to work right, you’ll have a bad day.

Your Stable Tag matters

This has always been the case, but it’s now more important than ever. If you say that your plugin’s stable tag is 1.2.3 but you do not have a /tags/1.2.3/ folder, your plugin will absolutely not behave as expected. If you’re not using tags folders, your stable tag should be trunk and that’s it. (But we would rather you use tags.)

Don’t use a folder for your MAIN files

Do not put your main plugin file in a subfolder of trunk, like /trunk/my-plugin/my-plugin.php as that will break downloads. You may use subfolders for included files.

The Assets folder is special

We have a dedicated folder for your plugin screenshots and banners. Want a faster, smaller, plugin? Put your assets in /assets/ and not /trunk/assets/ please. Your users will thank you. Screenshots and banner images go in that folder. It’s special. Use it wisely.

SVN is a release repository

One of the guidelines is that frequent commits to your plugin should be avoided.

Unlike Git, our SVN repository is a release repository, not a development one.  Every single commit triggers a regeneration of the zip files associated with the plugin. All the zips. That can be pretty brutal on the system. No one likes it when a plugin download breaks because the server’s down. Please be nice to our servers.

It’s okay to update your readme within reason

That said, if you need to update your readme to fix a critical public typo, do it. And if you want to update the readme to bump the version of WordPress you’ve tested up to, that’s fine too. Just keep it to the major releases. A plugin that is tested on 4.7 will show as tested up to 4.7.2 as well, after all.

 


Update on the April 5th, 2017 Outage

Published 4 Apr 2017 by DigitalOcean in DigitalOcean Blog.

Today, DigitalOcean's control panel and API were unavailable for a period of four hours and fifty-six minutes. During this time, all running Droplets continued to function, but no additional Droplets or other resources could be created or managed. We know that you depend on our services, and an outage like this is unacceptable. We would like to apologize and take full responsibility for the situation. The trust you've placed in us is our most important asset, so we'd like to share all of the details about this event.

At 10:24 AM EDT on April 5th, 2017, we began to receive alerts that our public services were not functioning. Within three minutes of the initial alerts, we discovered that our primary database had been deleted. Four minutes later we commenced the recovery process, using one of our time-delayed database replicas. Over the next four hours, we copied and restored the data to our primary and secondary replicas. The duration of the outage was due to the time it took to copy the data between the replicas and restore it into an active server.

At 3:20 PM EDT the primary database was completely restored, and no data was lost.

Timeline of Events

Future Measures

The root cause of this incident was an engineer-driven configuration error. A process performing automated testing was misconfigured using production credentials. As such, we will be drastically reducing access to the primary system for certain actions to ensure this does not happen again.

As noted above, the duration of the incident was primarily influenced by the speed of our network while reloading the data into our database. While it should be a rare occurrence for this type of action to happen again, we are in the process of upgrading our network connectivity between database servers and also updating our hardware to improve the speed of recovery. We expect these improvements to be completed over the next few months.

In Conclusion

We wanted to share this information with you as soon as possible so that you can understand the nature of the outage and its impact. In the coming days, we will continue to assess further safeguards against developer error, work to improve our processes around data recovery, and explore ways to provide better real time information during future customer impacting events. We take the reliability of our service seriously and are committed to delivering a platform that you can depend on to run your mission-critical applications. The entire team at DigitalOcean thanks you for your understanding and, again, we apologize for the impact of this incident.


Some extensions of MediaWiki does not work

Published 4 Apr 2017 by user7814815 in Newest questions tagged mediawiki - Stack Overflow.

I have installed the MediaWiki CMS from Debian. Before updating the OS from Squeeze to Jessie, all extensions worked fine. After the upgrade, some of the extensions stopped working: Gadgets, WikiEditor. The Gadgets extension is in the list of extensions used on the version page. The Gadgets configuration page is in the list of special pages, and appears in full in the user settings. But on pages with wiki articles, gadgets do not appear (for example Gadget-UTCLiveClock). I do not see the call to this script in the source code of the page.

(screenshot)

<script>(window.RLQ=window.RLQ||[]).push(function(){mw.config.set({"wgCanonicalNamespace":"","wgCanonicalSpecialPageName":false,"wgNamespaceNumber":0,"wgPageName":"Заглавная_страница","wgTitle":"Заглавная страница","wgCurRevisionId":774,"wgRevisionId":774,"wgArticleId":1,"wgIsArticle":true,"wgIsRedirect":false,"wgAction":"view","wgUserName":"Ярослав","wgUserGroups":["bureaucrat","svnadmins","sysop","*","user","autoconfirmed"],"wgCategories":[],"wgBreakFrames":false,"wgPageContentLanguage":"ru","wgPageContentModel":"wikitext","wgSeparatorTransformTable":[",\t."," \t,"],"wgDigitTransformTable":["",""],"wgDefaultDateFormat":"dmy","wgMonthNames":["","январь","февраль","март","апрель","май","июнь","июль","август","сентябрь","октябрь","ноябрь","декабрь"],"wgMonthNamesShort":["","янв","фев","мар","апр","май","июн","июл","авг","сен","окт","ноя","дек"],"wgRelevantPageName":"Заглавная_страница","wgRelevantArticleId":1,"wgRequestId":"dfc9594aa741e7ad292fdee8","wgUserId":2,"wgUserEditCount":1249,"wgUserRegistration":1347855570000,"wgUserNewMsgRevisionId":null,"wgIsProbablyEditable":true,"wgRestrictionEdit":["sysop"],"wgRestrictionMove":["sysop"],"wgIsMainPage":true,"wgWikiEditorEnabledModules":{"toolbar":true,"dialogs":true,"preview":false,"publish":false},"wgCategoryTreePageCategoryOptions":"{\"mode\":0,\"hideprefix\":20,\"showcount\":true,\"namespaces\":false}"});mw.loader.state({"site.styles":"ready","noscript":"ready","user.styles":"ready","user.cssprefs":"ready","user":"ready","user.options":"loading","user.tokens":"loading","mediawiki.legacy.shared":"ready","mediawiki.legacy.commonPrint":"ready","mediawiki.sectionAnchor":"ready","mediawiki.skinning.interface":"ready","skins.vector.styles":"ready"});mw.loader.implement("user.options@0edrpvk",function($,jQuery,require,module){mw.user.options.set({"gender":"male","timecorrection":"ZoneInfo|420|Asia/Barnaul","gadget-DotsSyntaxHighlighter":"1","gadget-HotCat":"1","gadget-UTCLiveClock":"1","gadget-Wikilinker":"1","gadget-addThisArticles":"1","gadget-popups":"1","gadget-preview":"1","usebetatoolbar":"1","usebetatoolbar-cgd":"1","watchlisttoken":"SOME_TOKEN_1"});});mw.loader.implement("user.tokens@USER_TOKENS",function ( $, jQuery, require, module ) {mw.user.tokens.set({"editToken":"SOME_TOKEN_2","patrolToken":"SOME_TOKEN_3","watchToken":"SOME_TOKEN_4","csrfToken":"SOME_TOKEN_5"});/*@nomin*/;});mw.loader.load(["mediawiki.page.startup","skins.vector.js"]);});</script>

<script>(window.RLQ=window.RLQ||[]).push(function(){mw.loader.load(["mediawiki.action.view.postEdit","site","mediawiki.user","mediawiki.hidpi","mediawiki.page.ready","mediawiki.searchSuggest","mediawiki.page.watch.ajax","ext.gadget.referenceTooltips","ext.gadget.directLinkToCommons","ext.gadget.preview","ext.gadget.popups","ext.gadget.addThisArticles","ext.gadget.HotCat","ext.gadget.Wikilinker","ext.gadget.DotsSyntaxHighlighter","ext.gadget.UTCLiveClock"]);});</script><script>(window.RLQ=window.RLQ||[]).push(function(){mw.config.set({"wgBackendResponseTime":385});});</script>

In the mirror version (Debian Squeeze, MediaWiki 1.25), everything works.

(screenshot)

<script>if(window.mw {mw.config.set({"wgCanonicalNamespace":"","wgCanonicalSpecialPageName":false,"wgNamespaceNumber":0,"wgPageName":"Заглавная_страница","wgTitle":"Заглавная страница","wgCurRevisionId":774,"wgRevisionId":774,"wgArticleId":1,"wgIsArticle":true,"wgIsRedirect":false,"wgAction":"view","wgUserName":"Ярослав","wgUserGroups":["bureaucrat","svnadmins","sysop","*","user","autoconfirmed"],"wgCategories":[],"wgBreakFrames":false,"wgPageContentLanguage":"ru","wgPageContentModel":"wikitext","wgSeparatorTransformTable":[",\t."," \t,"],"wgDigitTransformTable":["",""],"wgDefaultDateFormat":"dmy","wgMonthNames":["","январь","февраль","март","апрель","май","июнь","июль","август","сентябрь","октябрь","ноябрь","декабрь"],"wgMonthNamesShort":["","янв","фев","мар","апр","май","июн","июл","авг","сен","окт","ноя","дек"],"wgRelevantPageName":"Заглавная_страница","wgUserId":2,"wgUserEditCount":885,"wgUserRegistration":1347855570000,"wgUserNewMsgRevisionId":null,"wgIsProbablyEditable":true,"wgRestrictionEdit":["sysop"],"wgRestrictionMove":["sysop"],"wgIsMainPage":true,"wgWikiEditorEnabledModules":{"toolbar":true,"dialogs":true,"hidesig":true,"preview":false,"previewDialog":false,"publish":false},"wgCategoryTreePageCategoryOptions":"{\"mode\":0,\"hideprefix\":20,\"showcount\":true,\"namespaces\":false}"});}</script><script>if(window.mw){mw.loader.implement("user.options",function($,jQuery){mw.user.options.set({"ccmeonemails":0,"cols":80,"date":"default","diffonly":0,"disablemail":0,"editfont":"default","editondblclick":0,"editsectiononrightclick":0,"enotifminoredits":0,"enotifrevealaddr":0,"enotifusertalkpages":1,"enotifwatchlistpages":1,"extendwatchlist":0,"fancysig":0,"forceeditsummary":0,"gender":"unknown","hideminor":0,"hidepatrolled":0,"imagesize":2,"math":0,"minordefault":0,"newpageshidepatrolled":0,"nickname":"","norollbackdiff":0,"numberheadings":0,"previewonfirst":0,"previewontop":1,"rcdays":7,"rclimit":50,"rows":25,"showhiddencats":0,"shownumberswatching":1,"showtoolbar":1,"skin":"vector","stubthreshold":0,"thumbsize":5,"underline":2,"uselivepreview":0,"usenewrc":0,"watchcreations":1,"watchdefault":1,"watchdeletion":0,"watchlistdays":3,"watchlisthideanons":0,"watchlisthidebots":0,"watchlisthideliu":0,"watchlisthideminor":0,"watchlisthideown":0,"watchlisthidepatrolled":0,"watchmoves":0,"watchrollback":0,"wllimit":250,"useeditwarning":1,"prefershttps":1,"mathJax":false,"language":"ru","variant-gan":"gan","variant-iu":"iu","variant-kk":"kk","variant-ku":"ku","variant-shi":"shi","variant-sr":"sr","variant-tg":"tg","variant-uz":"uz","variant-zh":"zh","searchNs0":true,"searchNs1":false,"searchNs2":false,"searchNs3":false,"searchNs4":false,"searchNs5":false,"searchNs6":false,"searchNs7":false,"searchNs8":false,"searchNs9":false,"searchNs10":false,"searchNs11":false,"searchNs12":false,"searchNs13":false,"searchNs14":false,"searchNs15":false,"gadget-referenceTooltips":1,"gadget-HotCat":"1","gadget-UTCLiveClock":"1","gadget-addThisArticles":"1","gadget-popups":"1","gadget-preview":"1","timecorrection":"ZoneInfo|360|Asia/Novosibirsk","usebetatoolbar":"1","usebetatoolbar-cgd":"1","watchlisttoken":"SOME_TOKEN_1"});},{},{});mw.loader.implement("user.tokens",function($,jQuery){mw.user.tokens.set({"editToken":"SOME_TOKEN_2","patrolToken":"SOME_TOKEN_3","watchToken":"SOME_TOKEN_4"});},{},{});/* cache key: sitedb-mediawiki_:resourceloader:filter:minify-js:7:SOME_ROKEN */}</script>

<script>if(window.mw){mw.config.set({"wgBackendResponseTime":320});}</script>

I installed MediaWiki 1.28 from the official site from the tar archive to a new directory and a new schema, without additional extensions. I set up Gadget-UTCLiveClock, but it still does not work on the article page.

There are no errors in the logs.

What could be the problem?

MediaWiki 1.28.0 PHP 5.6.30-0+deb8u1 (apache2handler) MySQL 5.5.54-0+deb8u1 Debian Jessie 3.16.0-4-amd64 #1 SMP Debian 3.16.39-1+deb8u2 (2017-03-07) x86_64 GNU/Linux


Introducing Monitoring: Insight into Your Infrastructure

Published 3 Apr 2017 by DigitalOcean in DigitalOcean Blog.

Over the lifecycle of your application, knowing when and why an issue in production occurs is critical. At DigitalOcean, we understand this and want to enable developers to make informed decisions about scaling their infrastructure. That's why we are excited to announce our new Monitoring service, available today for free with all Droplets. It gives you the tools to resolve issues quickly by alerting you when one occurs and giving you the information you need to understand it.

Monitoring the applications you've deployed should be as simple and intuitive as the rest of the DigitalOcean experience. Earlier this year, we released an open source agent and improved graphs that give you a better picture of the health of your Droplets. That was just the first piece of the puzzle. The agent offers greater visibility into your infrastructure, and now Monitoring will let you know when to act on that information.

Monitoring is natively integrated with the DigitalOcean platform and can be enabled at no extra cost by simply checking a box when creating your Droplets. It introduces new alerting capabilities using the metrics collected by the agent, allowing your team to receive email or Slack notifications based on the resource utilization and operational health of your Droplets.

View Graphs & Statistics

The Monitoring service exposes system metrics and provides an overview of your Droplets' health. The metrics are collected at one-minute intervals and the data is retained for a month, enabling you to view both up-to-the-minute and historical data. The improved Droplet graphs allow you to visualize how your instances are performing over time.

The following metrics are currently available:

Create Alert Policies

You can create alert policies on any of your metrics to receive notifications when the metric crosses your specified threshold. An alert policy monitors a single metric over a time period you specify. Alerts are triggered when the state is above or below your threshold for the specified time period. You can leverage DigitalOcean tags to group your Droplets based on your project or environment. Then you can apply the alert policy to specific Droplets or groups of tagged Droplets.

Alert policies can be created from the Monitoring tab in the DigitalOcean control panel:

Create alert

You can find more information about creating alert policies in this tutorial on the DigitalOcean Community site.

Configure Notifications

When you set up an alert policy, you will be able to choose between two notification methods:

Slack notifications

You'll receive notifications both when an alert threshold has been exceeded and when the issue has been resolved.

Getting Started

To enable Monitoring on your Droplets, you'll need to have the agent installed. On new Droplets, it's as simple as clicking the Monitoring checkbox during Droplet creation.

Enable monitoring

On existing Droplets, you can install the agent by running:

curl -sSL https://agent.digitalocean.com/install.sh | sh

Find more information on the agent itself in this tutorial on the DigitalOcean Community site.

Coming Soon

With the first iteration of our Monitoring service out the door, we're already working on what's next. Some features you will see soon include:

From alerting on issues to visualizing metrics, we want to provide you with the tools you need to monitor the health and performance of your applications in production. We'd love to hear your feedback. What metrics are important for your team? How can we help integrate Monitoring into your workflow? Let us know in the comments or submit a suggestion on our UserVoice page.


Berlusconi and post-truth politics, the Rojava experiment, and the future of humanity

Published 3 Apr 2017 by in New Humanist Articles and Posts.

The best long-reads from the New Humanist this month.

Charles Taylor: How to win the argument

Published 3 Apr 2017 by in New Humanist Articles and Posts.

Community and tradition don’t have to be set against migration, change and difference, argues the philosopher Charles Taylor.

Extract wikipedia articles belonging to a category from offline dumps

Published 3 Apr 2017 by p.j in Newest questions tagged mediawiki - Stack Overflow.

I have Wikipedia article dumps in different languages. I want to filter them down to articles which belong to a category (specifically Category:WikiProject_Biography).

I could find a lot of similar questions, for example:

  1. Wikipedia API to get articles belonging to a category
  2. How do I get all articles about people from Wikipedia?

However, I would like to do it all offline, that is, using dumps, and also for different languages.

Other things which I explored are the category table and the categorylinks table: MediaWiki_1.28.0_database_schema
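
One crude offline starting point is to stream the pages-articles dump with PHP's XMLReader and keep pages whose wikitext contains the category link. This is a sketch with a placeholder file name, and it will miss articles whose category is added indirectly through a template (which is how WikiProject banners are usually applied, typically on talk pages), so joining the page and categorylinks SQL dumps in a local database remains the more reliable route:

<?php
// Stream a MediaWiki XML dump and print titles of pages whose wikitext
// contains a direct link to the wanted category. File name is a placeholder.
$dumpFile = 'enwiki-latest-pages-articles.xml';
$category = 'Category:WikiProject Biography';

$reader = new XMLReader();
$reader->open( $dumpFile );

$title = null;
while ( $reader->read() ) {
    if ( $reader->nodeType !== XMLReader::ELEMENT ) {
        continue;
    }
    if ( $reader->name === 'title' ) {
        $title = $reader->readString();   // remember the current page title
    } elseif ( $reader->name === 'text' ) {
        $text = $reader->readString();    // wikitext of the revision
        if ( $title !== null && stripos( $text, '[[' . $category ) !== false ) {
            echo $title . "\n";
        }
    }
}
$reader->close();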


Plugin Submissions ETA Reopening Early Next Week

Published 31 Mar 2017 by Ipstenu (Mika Epstein) in Make WordPress Plugins.

I really want to say “We’ll reopen on Monday!” but right now we’re only aiming for Monday.

What’s going on?

We found some bugs that didn’t happen in testing.

For example, when we did the final import of all the pending plugins, they were in a maybe-wrong state. That meant we had to go through all our emails and logs to make sure we’d emailed everyone about their plugin status or not. That took us until Friday afternoon.

At the same time, we found some process flow bugs that were just going to make things worse all around and had to address those. It doesn’t do you any good to submit a plugin if we can’t review it, or if approvals don’t generate your SVN folder, for example! We had to document all of those to make sure things would get fixed in the right order (some of them we can live with, obviously).

The good news is that we did clean out the queue, so everyone who had a submission pending has now been emailed. Some of you twice. Sorry about that. If you didn’t get one and you think your plugin is pending, email us at plugins@wordpress.org and we can look.

Thank You Systems/Meta

Systems and Meta have been wonderful, plowing through the tickets raised. Right now, we’re prioritizing “Fix what’s broken” so the only tickets you see in the Plugin Directory v 3.0 milestone are items we feel must be fixed as soon as possible. If I’ve moved your ticket out, it’s simply because it’s not deemed mission critical at this moment, and not that it will never be addressed. It’s triage, and we were just as brutal about it on ourselves.

Thank You Too

I really do appreciate everyones patience and understanding.

Obviously things didn’t go perfectly, but considering the magnitude of this change, it’s gone smoother than I predicted (I may owe people dinner now). If you want to help us out, right now please spread the word to your fellow developers. Remember, if you can get everyone to read this blog first before they email/dm/ping for status, you make reviews go faster!

#directory, #repository


"Secularism isn’t about the absence of religion, it’s about the structure of the state"

Published 30 Mar 2017 by in New Humanist Articles and Posts.

Q&A with Yasmin Rehman, veteran campaigner recently named Secularist of the Year.

Tableau Vivant and the Unobserved

Published 30 Mar 2017 by carinamm in State Library of Western Australia Blog.


Still scene: Tableau Vivant and the Unobserved, 2016, Nicola Kaye, Stephen Terry.

Tableau Vivant and the Unobserved visually questions how history is made, commemorated and forgotten. Through digital art installation, Nicola Kaye and Stephen Terry expose the unobserved and manipulate our perception of the past.  Their work juxtaposes archival and contemporary imagery to create an experience for the visitor where unobserved lives from the archive collide with the contemporary world.

Tableau Vivant and the Unobserved is the culmination of the State Library’s inaugural J.S. Battye Creative Fellowship.  The Creative Fellowship aims to enhance engagement with the Library’s heritage collections and provide new experiences for the public.

Artists floor talk
Thursday 6 April, 6pm
Ground Floor Gallery, State Library of Western Australia.

Nicola Kaye and Stephen Terry walk you through Tableau Vivant and the Unobserved

In conversation with the J.S. Battye Creative Fellows
Thursday 27 April, 6pm
State Library Theatre.

How can contemporary art lead to new discoveries about collections and ways of engaging with history?  Nicola Kaye and Stephen Terry will discuss this idea drawing from the experience of creating Tableau Vivant and the Unobserved.

Tableau Vivant and the Unobserved is showing at the State Library from 4 April – 12 May 2017.
For more information visit: www.slwa.wa.gov.au



Faster Files Forever

Published 29 Mar 2017 by Nicola Nye in FastMail Blog.

You know FastMail provides the best in email, but did you also know that as part of your account you have file storage? All of our plans include bonus storage for files in addition to your email account.

Today we are proud to reveal our improved file storage screens.

The cluttered, slow static screens have been replaced with a shiny, fast, responsive interface. The new three panel view lets you view and edit file and folder details dynamically, including image previews.

Screenshot of new three panel interface

Easily upload your files and folders with drag and drop on the web interface. (Note: Safari does not support folder drag and drop.) Our upload manager allows you to track your upload progress and cancel individual uploads, or even the whole batch at once.

Upload manager shows progress of file uploads

Files support works just as well on our mobile apps as it does on the web interface (whether you're on mobile or desktop). You can even manage your files via FTP or WebDAV.

No screen refreshes required when files are uploaded from mobile or shared in a multi-user account: see them instantly on all FastMail clients.

Easily locate files and folders with our powerful search tool.

Once your files are uploaded, you can quickly attach them to emails.

Select file from storage to attach to email

You can even host simple websites and photo galleries from your file storage, and share files with other users in your account.

File quota limits apply, but they are independent of your mail quota: filling up your file storage won't affect delivery of your email!

Read the online help for full details of the improved Files feature.


Remembering Another China in Kunming

Published 29 Mar 2017 by Tom Wilson in tom m wilson.

Last weekend I headed out for a rock climbing session with some locals and expats.  First I had to cross town, and while doing so I came across an old man doing water calligraphy by Green Lake.  I love the transience of this art: the beginning of the poem is starting to fade by the time he reaches […]

Book review: Charges - The Supplicants

Published 29 Mar 2017 by in New Humanist Articles and Posts.

Elfriede Jelinek's play is a blistering indictment of European asylum policy, and the indifference or hostility of some Europeans.

The New Directory Is (Mostly) Live

Published 28 Mar 2017 by Ipstenu (Mika Epstein) in Make WordPress Plugins.

Sorry about the post-facto notice, there were a lot of moving parts and we got some things out of order when communicating between the multiple teams.

Current Status

Known Issues

Besides everything listed on Meta Trac, we are aware of the following issues:

Plugin Submissions are Currently Closed

THERE ARE NO PLUGIN APPROVALS GOING ON AT THIS TIME

You can’t submit a new one for approval, and we won’t be approving anything until possibly tomorrow at the earliest.

I will post here (make/plugins) as soon as we reopen and start things moving along, but please don’t ask for a status update. If it’s not posted here, we don’t have one, and you’ll just make everything take longer.

Plugins Will No Longer Be Rejected After Seven Days

Before you panic, we’re not going to reject plugins after 7 days anymore. The queue will be handled differently so having old plugins with no replies is less of a problem. Also? We’ll be able to rename your plugin slug before approval, so that will take care of most things like `google-analytics-by-faro` 😁 and other obvious typos.

However. This means the onus is now even more on you to make sure you whitelist emails from `wordpress.org` in your email servers. A high volume of people never see the first email (the ‘please fix’) and only see the 7-day follow-up, so now you won’t be getting that anymore.

#announcement


Week #11: Raided yet again

Published 27 Mar 2017 by legoktm in The Lego Mirror.

If you missed the news, the Raiders are moving to Las Vegas. The Black Hole is leaving Oakland (again) for a newer, nicer, stadium in the desert. But let's talk about how we got here, and how different this is from the moving of the San Diego Chargers to Los Angeles.

The current Raiders stadium is outdated and old. It needs renovating to keep up with other modern stadiums in the NFL. Owner Mark Davis isn't a multi-billionaire who could finance such a stadium. And the City of Oakland is definitely not paying for it. So the options left were to find outside financing for Oakland, or to find said financing somewhere else. And unfortunately it was the latter option that won out in the end.

I think it's unsurprising that more and more cities are refusing to put public money into stadiums that they will see no profit from - it makes no sense whatsoever.

Overall I think the Raider Nation will adapt and survive just as it did when they moved to Los Angeles. The Raiders still have an awkward two-to-three years left in Oakland, and with Derek Carr at the helm, it looks like they will be good ones.


Wikimedia genealogy project

Published 25 Mar 2017 by Sam Wilson in Sam's notebook.

The Wikimedia genealogy project now has a mailing list.


Kubuntu 17.04 Beta 2 released for testers

Published 23 Mar 2017 by valorie-zimmerman in Kubuntu.

Today the Kubuntu team is happy to announce that Kubuntu Zesty Zapus (17.04) Beta 2 is released. With this Beta 2 pre-release, you can see and test what we are preparing for 17.04, which we will be releasing April 13, 2017.

Kubuntu 17.04 Beta 2

 

NOTE: This is Beta 2 Release. Kubuntu Beta Releases are NOT recommended for:

* Regular users who are not aware of pre-release issues
* Anyone who needs a stable system
* Anyone uncomfortable running a possibly frequently broken system
* Anyone in a production environment with data or work-flows that need to be reliable

Getting Kubuntu 17.04 Beta 2:
* Upgrade from 16.10: run `do-release-upgrade -d` from a command line.
* Download a bootable image (ISO) and put it onto a DVD or USB Drive : http://cdimage.ubuntu.com/kubuntu/releases/zesty/beta-2/

Release notes: https://wiki.ubuntu.com/ZestyZapus/Beta2/Kubuntu


Week #10: March Sadness

Published 23 Mar 2017 by legoktm in The Lego Mirror.

In California March Madness is really...March Sadness. The only Californian team that is still in is UCLA. UC Davis made it in but was quickly eliminated. USC and Saint Mary's both fell in the second round. Cal and Stanford didn't even make it in. At best we can root for Gonzaga, but that's barely it.

Some of us root for the schools we went to, but for those of us who grew up here and support local teams, we're left hanging. And it's not bias in the selection committee; those schools just aren't good enough.

On top of that we have a top notch professional team through the Warriors, but our amateur players just aren't up to muster.

So good luck to UCLA, represent California hella well. We somewhat believe in you.


Week #9: The jersey returns

Published 23 Mar 2017 by legoktm in The Lego Mirror.

And so it has been found. Tom Brady's jersey was in Mexico the whole time, stolen by a member of the press. And while it's great news for Brady, sports memorabilia fans, and the FBI, it doesn't look good for journalists. Journalists are given a lot of access to players, allowing them to obtain better content and get better interviews. It would not be surprising if the NFL responds to this incident by locking down the access that journalists are given. And that would be a real bummer.

I'm hoping this is seen as an isolated incident and all journalists are not punished for the offenses by one.


Piwigo.com Enterprise plans, now official!

Published 23 Mar 2017 by Pierrick Le Gall in The Piwigo.com Blog.

In the shadow of the standard plan for several years and yet already adopted by more than 50 organizations, it is time to officially introduce the Piwigo.com Enterprise plans. They were designed for organizations, private or public, looking for a simple, affordable and yet complete tool to manage their collection of photos.

The main idea behind Piwigo.com Enterprise is to democratize photo library management for organizations of all kind and size. We are not targeting fortune 500, although some of them are already clients, but fortune 5,000,000 companies!

Piwigo.com Enterprise plans can replace, at a reasonable cost, inadequate solutions relying on intranet shared folders, where photos are sometimes duplicated, deleted by mistake, without the appropriate permission system.

Introduction to Piwigo.com Enterprise plans


Why announce these plans officially today? Because the current trend clearly shows us that our Enterprise plans have found their market. Although semi-official, Enterprise plans represented nearly 40% of our revenue in February 2017! It is time to put these plans under the spotlight.

In practice, here is what changes with the Piwigo.com Enterprise plans:

  1. they can be used by organizations, as opposed to the standard plan
  2. additional features, such as support for non-photo files (PDF, videos …)
  3. higher level of service (priority support, customization, presentation session)

Discover Piwigo.com Enterprise


v2.3.0

Published 22 Mar 2017 by fabpot in Tags from Twig.


v1.33.0

Published 22 Mar 2017 by fabpot in Tags from Twig.


Please Help Us Track Down Apple II Collections

Published 20 Mar 2017 by Jason Scott in ASCII by Jason Scott.

Please spread this as far as possible – I want to reach folks who are far outside the usual channels.

The Summary: Conditions are very, very good right now for easy, top-quality, final ingestion of original commercial Apple II Software and if you know people sitting on a pile of it or even if you have a small handful of boxes, please get in touch with me to arrange the disks to be imaged. apple@textfiles.com. 

The rest of this entry says this in much longer, hopefully compelling fashion.

We are in a golden age for Apple II history capture.

For now, and it won’t last (because nothing lasts), an incredible amount of interest and effort and tools are all focused on acquiring Apple II software, especially educational and engineering software, and ensuring it lasts another generation and beyond.

I’d like to take advantage of that, and I’d like your help.

Here’s the secret about Apple II software: Copy Protection Works.

Copy protection, that method of messing up easy copying from floppy disks, turns out to have been very effective at doing what it is meant to do – slow down the duplication of materials so a few sales can eke by. For anything but the most compelling, most universally interesting software, copy protection did a very good job of ensuring that only the approved disks that went out the door are the remaining extant copies for a vast majority of titles.

As programmers and publishers laid logic bombs and coding traps and took the brilliance of watchmakers and used it to design alternative operating systems, they did so to ensure people wouldn’t take the time to actually make the effort to capture every single bit off the drive and do the intense and exacting work to make it easy to spread in a reproducible fashion.

They were right.

So, obviously it wasn’t 100% effective at stopping people from making copies of programs, or so many people who used the Apple II wouldn’t remember the games they played at school or at user-groups or downloaded from AE Lines and BBSes, with pirate group greetings and modified graphics.

What happened is that pirates and crackers did what was needed to break enough of the protection on high-demand programs (games, productivity) to make them work. They used special hardware modifications to “snapshot” memory and pull out a program. They traced the booting of the program by stepping through its code and then snipped out the clever tripwires that freaked out if something wasn’t right. They tied it up into a bow so that instead of a horrendous 140 kilobyte floppy, you could have a small 15 or 20 kilobyte program instead. They even put multiple cracked programs together on one disk so you could get a bunch of cool programs at once.

I have an entire section of TEXTFILES.COM dedicated to this art and craft.

And one could definitely argue that the programs (at least the popular ones) were “saved”. They persisted, they spread, they still exist in various forms.

And oh, the crack screens!

I love the crack screens, and put up a massive pile of them here. Let’s be clear about that – they’re a wonderful, special thing and the amount of love and effort that went into them (especially on the Commodore 64 platform) drove an art form (demoscene) that I really love and which still thrives to this day.

But these aren’t the original programs and disks, and in some cases, not the originals by a long shot. What people remember booting in the 1980s were often distant cousins to the floppies that were distributed inside the boxes, with the custom labels and the nice manuals.


On the left is the title screen for Sabotage. It’s a little clunky and weird, but it’s also something almost nobody who played Sabotage back in the day ever saw; they only saw the instructions screen on the right. The reason for this is that there were two files on the disk, one for starting the title screen and then the game, and the other was the game. Whoever cracked it long ago only did the game file, leaving the rest as one might leave the shell of a nut.

I don’t think it’s terrible these exist! They’re art and history in their own right.

However… the mistake, which I completely understand making, is to see programs and versions of old Apple II software up on the Archive and say “It’s handled, we’re done here.” You might be someone with a small stack of Apple II software, newly acquired or decades old, and think you don’t have anything to contribute.

That’d be a huge error.

It’s a bad assumption because there’s a chance the original versions of these programs, unseen since they were sold, are sitting in your hands. It’s a version different than the one everyone thinks is “the” version. It’s precious, it’s rare, and it’s facing the darkness.

There is incredibly good news, however.

I’ve mentioned some of these folks before, but there is now a powerful allegiance of very talented developers and enthusiasts who have been pouring an enormous amount of skills into the preservation of Apple II software. You can debate if this is the best use of their (considerable) skills, but here we are.

They have been acquiring original commercial Apple II software from a variety of sources, including auctions, private collectors, and luck. They’ve been duplicating the originals on a bits level, then going in and “silent cracking” the software so that it can be played on an emulator or via the web emulation system I’ve been so hot on, and not have any change in operation, except for not failing due to copy protection.

With a “silent crack”, you don’t take the credit, you don’t make it about yourself – you just make it work, and work entirely like it did, without yanking out pieces of the code and program to make it smaller for transfer or to get rid of a section you don’t understand.

Most prominent of these is 4AM, who I have written about before. But there are others, and they’re all working together at the moment.

These folks, these modern engineering-minded crackers, are really good. Really, really good.

They’ve been developing tools from the ground up that are focused on silent cracks, of optimizing the process, of allowing dozens, sometimes hundreds of floppies to be evaluated automatically and reducing the workload. And they’re fast about it, especially when dealing with a particularly tough problem.

Take, for example, the efforts required to crack Pinball Construction Set, and marvel not just that it was done, but that a generous and open-minded article was written explaining exactly what was being done to achieve this.

This group can be handed a stack of floppies, image them, evaluate them, and find which have not yet been preserved in this fashion.

But there’s only one problem: They are starting to run out of floppies.

I should be clear that there’s plenty left in the current stack – hundreds of floppies are being processed. But I also have seen the effort chug along and we’ve been going through direct piles, then piles of friends, and then piles of friends of friends. We’ve had a few folks from outside the community bring stuff in, but those are way more scarce than they should be.

I’m working with a theory, you see.

My theory is that there are large collections of Apple II software out there. Maybe someone’s dad had a store long ago. Maybe someone took in boxes of programs over the years and they’re in the basement or attic. I think these folks are living outside the realm of the “Apple II Community” that currently exists (and which is a wonderful set of people, be clear). I’m talking about the difference between a fan club for surfboards and someone who has a massive set of surfboards because his dad used to run a shop and they’re all out in the barn.

A lot of what I do is put groups of people together and then step back to let the magic happen. This is a case where this amazingly talented group of people are currently a well-oiled machine – they help each other out, they are innovating along this line, and Apple II software is being captured in a world-class fashion, with no filtering being done because it’s some hot ware that everyone wants to play.

For example, piles and piles of educational software have returned from potential oblivion, because it’s about the preservation, not the title. Wonderfully done works are being brought back to life and are playable on the Internet Archive.

So like I said above, the message is this:

Conditions are very, very good right now for easy, top-quality, final ingestion of original commercial Apple II Software and if you know people sitting on a pile of it or even if you have a small handful of boxes, please get in touch with me to arrange the disks to be imaged. apple@textfiles.com.

I’ll go on podcasts or do interviews, or chat with folks on the phone, or trade lots of e-mails discussing details. This is a very special time, and I feel the moment to act is now. Alliances and communities like these do not last forever, and we’re in a peak moment of talent and technical landscape to really make a dent in what are likely acres of unpreserved titles.

It’s 4am and nearly morning for Apple II software.

It’d be nice to get it all before we wake up.

 


Nature in China

Published 20 Mar 2017 by Tom Wilson in tom m wilson.

The sun sets in south-east Yunnan province, over karst mountains and lakes, not far from the border with Vietnam. Last weekend I went to Puzheihei, an area of karst mountains surrounded by water lilly-filled lakes 270kms south-east of Kunming. What used to be a five hour bus journey now just takes 1.5 hours on the […]

Managing images on an open wiki platform

Published 19 Mar 2017 by Oliver K in Newest questions tagged mediawiki - Webmasters Stack Exchange.

I'm developing a wiki using MediaWiki, and there are a few ways of implementing images in wiki pages, such as uploading them to the website itself, hosting them on external websites (which risks them being blocked or taken down), or requesting that others place an image.

Surely images may be difficult to manage as one day someone may upload a vulgar image and many people will then see it. How can I ensure vulgar images do not get through and that administrators aren't scarred for life after monitoring them?


Returning (again) to WordPress

Published 19 Mar 2017 by Sam Wilson in Sam's notebook.

Every few years I try to move my blog away from WordPress. I tried again earlier this year, but here I am back in WordPress before even a month has gone by! Basically, nothing is as conducive to writing for the web.

I love MediaWiki (which is what I shifted to this time; last time around it was Dokuwiki, and for a brief period last year it was a wrapper for Pandoc that I’m calling markdownsite; there have been other systems too) but wikis really are general-purpose co-writing platforms, best for multiple users working on text that needs to be revised forever. Not random mutterings that no one will ever read, let alone particularly need to edit on an on-going basis.

So WordPress it is, and it’s leading me to consider the various ‘streams’ of words that I use daily: email, photography, journal, calendar, and blog (I’ll not get into the horrendous topic of chat platforms). In the context of those streams, WordPress excels. So I’ll try it again, I think.


Does the composer software have a command like python -m compileall ./

Published 18 Mar 2017 by jehovahsays in Newest questions tagged mediawiki - Server Fault.

I want to use Composer for a MediaWiki root folder with multiple directories that each need Composer to install their dependencies, using a single command like composer -m installall ./. For example, if the root folder were all written in Python, I could use the command python -m compileall ./.
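As far as I know, Composer has no built-in equivalent of python -m compileall ./ that recurses into subdirectories. One workaround is to walk the tree and run composer install in every directory that contains a composer.json. Below is a minimal Python sketch of that idea; the root path, the vendor/ skip, and the --no-interaction flag are illustrative assumptions, not a definitive recipe.

import os
import subprocess

ROOT = "."  # assumed: run from the MediaWiki root folder

for dirpath, dirnames, filenames in os.walk(ROOT):
    # Skip vendor/ trees that Composer itself creates.
    dirnames[:] = [d for d in dirnames if d != "vendor"]
    if "composer.json" in filenames:
        print("Running composer install in", dirpath)
        subprocess.run(["composer", "install", "--no-interaction"], cwd=dirpath, check=True)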


Kubuntu has a new member: Darin Miller

Published 18 Mar 2017 by clivejo in Kubuntu.

Today at 15:58 UTC the Kubuntu Council approved Darin Miller’s application for becoming a Kubuntu Member.

Darin has been coming to the development channel and taking part in the informal developer meetings on Big Blue Button for a while now, helping out where he can with the packaging and continuous integration. His efforts have already made a huge difference.

Here’s a snippet of his interview:

<DarinMiller> I have contributed very little independently, but I have helped fix lintian issues, control files deps, and made a very minor mod to one of the KA scripts.
<clivejo> minor mod?
<acheronuk> very useful mod IIR ^^^
<clivejo> I think you are selling yourself short there!
-*- clivejo was very frustrated with the tooling prior to that fix
<DarinMiller> From coding perspective, it was well within my skillset, so the mod seemed minor to me.
<clivejo> well it was much appreciated
<yofel> when did you start hanging out here and how did you end up in this channel?
<DarinMiller> That’s another reason I like this team. I feel my efforts are appreciated.
<DarinMiller> And that encourages me to want to do more.

He is obviously a very modest chap and the Kubuntu team would like to offer him a very warm welcome, as well as greeting him with our hugs and the list of jobs / work to be done!

For those interested here’s Darin’s wiki page: https://wiki.kubuntu.org/~darinmiller and his Launchpad page: https://launchpad.net/~darinmiller

The meeting log is available here.


Hilton Harvest Earth Hour Picnic and Concert

Published 18 Mar 2017 by Dave Robertson in Dave Robertson.



Clarification of Guideline 8 – Executable Code and Installs

Published 16 Mar 2017 by Ipstenu (Mika Epstein) in Make WordPress Plugins.

Since Jetpack announced it installs themes, a number of people have asked if this is a violation of the 8th guideline:

The plugin may not send executable code via third-party systems.

And in specific, these two items:

  • Serving updates or otherwise installing plugins, themes, or add-ons from servers other than WordPress.org’s
  • Installing premium versions of the same plugin

The short answer is no, it’s not, and yes, you could do this too.

The longer answer involves understanding the intent of this guideline, which initially was to prevent nefarious developers from using your installs as a botnet. In addition, it’s used to disallow plugins that exist only to install products from other locations, without actually being of use themselves (ie. marketplace only plugins). Finally it’s there to prevent someone silly from making a plugin that installs a collection of ‘cool plugins’ they found from GitHub and want to make easier to install. Which actually did happen once.

Plugins are expected to do ‘something’ to your site. A plugin that exists only to check a license and install a product, while incredibly useful, is not something we currently allow as a standalone product. This is why we allow plugins to have the in-situ code for updates that is used by their add-on plugins. The plugin we host has to use WordPress.org to update itself.

In addition, we do permit valid services to perform installations onto sites, and have for a very long time. ManageWP, for example, has had this ability for quite a while. It provides a valid service, letting you manage multiple sites from their dashboard, and yes, install and update plugins. Going back to the example of a plugin that hosts the update code for its add-ons, the ‘service’ is the license you bought for the add-on plugin.

The trick here, and this is what is about to sound like hair splitting, is that it’s not the plugin UI on your site that does the install. In order for Manage WP and Jetpack to work, you have to go to your panel on their sites and install the items. If you wanted to make, say, my.servicename.com and let people log in, authenticate their sites, and from that interface use a JSON API to trigger an install, you absolutely, 100%, totally can.

To hit the major talking points:

I know this is frustrating to a lot of people. The reason it never came up before is no one asked us, and it isn’t our place to run your business or invent all the cool things. The guidelines are guidelines, and not laws or rules, to allow people to interpret them, and you’re always welcome to ask us if something’s okay or not. Or warn us if you’re about to do something you think might get the masses up in a dander.

#guidelines


Sandpapering Screenshots

Published 15 Mar 2017 by Jason Scott in ASCII by Jason Scott.

The collection I talked about yesterday was subjected to the Screen Shotgun, which does a really good job of playing the items, capturing screenshots, and uploading them into the item to allow people to easily see, visually, what they’re in for if they boot them up.

In general, the screen shotgun does the job well, but not perfectly. It doesn’t understand what it’s looking at, at all, and the method I use to decide the “canonical” screenshot is inherently shallow – I choose the largest filesize, because that tends to be the most “interesting”.
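In code, that heuristic is only a few lines. Here is a minimal Python sketch of it, assuming each item's screenshots sit together in one directory as PNG files (the layout is an assumption for illustration, not how the Screen Shotgun actually stores things):

from pathlib import Path

def pick_canonical(screenshot_dir):
    """Return the largest screenshot in a directory, mirroring the
    largest-filesize heuristic described above."""
    shots = list(Path(screenshot_dir).glob("*.png"))
    if not shots:
        raise FileNotFoundError("no screenshots in " + str(screenshot_dir))
    return max(shots, key=lambda p: p.stat().st_size)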

The bug in this is that if you have, say, these three screenshots:

…it’s going to choose the first one, because those middle-of-loading graphics for an animated title screen have tons of little artifacts, and the filesize is bigger. Additionally, the second is fine, but it’s not the “title”, the recognized “welcome to this program” image. So the best choice turns out to be the third.

I don’t know why I’d not done this sooner, but while waiting for 500 disks to screenshot, I finally wrote a program to show me all the screenshots taken for an item, and declare a replacement canonical title screenshot. The results have been way too much fun.

It turns out, doing this for Apple II programs in particular, where it’s removed the duplicates and is just showing you a gallery, is beautiful:

Again, the all-text “loading screen” in the middle, which is caused by blowing program data into screen memory, wins the “largest file” contest, but literally any other of the screens would be more appropriate.

This is happening all over the place: crack screens win over the actual main screen, the mid-loading noise of Apple II programs win over the final clean image, and so on.

Working with tens of thousands of software programs, primarily alone, means that I’m trying to find automation wherever I can. I can’t personally boot up each program and do the work needed to screenshot/describe it – if a machine can do anything, I’ll make the machine do it. People will come to me with fixes or changes if the results are particularly ugly, but it does leave a small amount that no amount of automation is likely to catch.

If you watch a show or documentary on factory setups and assembly lines, you’ll notice they can’t quite get rid of people along the entire line, especially the sign-off. Someone has to keep an eye to make sure it’s not going all wrong, or, even more interestingly, a table will come off the line and you see one person giving it a quick run-over with sandpaper, just to pare down the imperfections or missed spots of the machine. You still did an enormous amount of work with no human effort, but if you think that’s ready for the world with no final sign-off, you’re kidding yourself.

So while it does mean another hour or two looking at a few hundred screenshots, it’s nice to know I haven’t completely automated away the pleasure of seeing some vintage computer art, for my work, and for the joy of it.


More Ways to Work with Load Balancers

Published 15 Mar 2017 by DigitalOcean in DigitalOcean Blog.

When building new products at DigitalOcean, one of our goals is to ensure that they're simple to use and developer friendly. And that goes beyond the control panel; we aim to provide intuitive APIs and tools for each of our products. Since the release of Load Balancers last month, we've worked to incorporate them into our API client libraries and command line client. We've also seen community-supported open source projects extended to support Load Balancers.

Today, we want to share several new ways you can interact with Load Balancers.

Command Line: doctl

doctl is our easy-to-use, official command line client. Load Balancer support landed in version v1.6.0. You can download the release from GitHub or install it using Homebrew on Mac:

brew install doctl

You can use doctl for anything you can do in our control panel. For example, here's how you would create a Load Balancer:

doctl compute load-balancer create --name "example-01" \
    --region "nyc3" --tag-name "web:prod" \
    --algorithm "round_robin" \
    --forwarding-rules \
    "entry_protocol:http,entry_port:80,target_protocol:http,target_port:80"

Find doctl's full documentation in this DigitalOcean tutorial.

Go: godo

We're big fans of Go, and godo is the way to interact with DigitalOcean using Go. Load Balancer support is included in the recently tagged v1.0.0 release. Here's an example:

createRequest := &godo.LoadBalancerRequest{
    Name:      "example-01",
    Algorithm: "round_robin",
    Region:    "nyc3",
    ForwardingRules: []godo.ForwardingRule{
        {
            EntryProtocol:  "http",
            EntryPort:      80,
            TargetProtocol: "http",
            TargetPort:     80,
        },
    },
    HealthCheck: &godo.HealthCheck{
        Protocol:               "http",
        Port:                   80,
        Path:                   "/",
        CheckIntervalSeconds:   10,
        ResponseTimeoutSeconds: 5,
        HealthyThreshold:       5,
        UnhealthyThreshold:     3,
    },
    StickySessions: &godo.StickySessions{
        Type: "none",
    },
    Tag:                 "web:prod",
    RedirectHttpToHttps: false,
}

lb, _, err := client.LoadBalancers.Create(ctx, createRequest)

The library's full documentation is available on GoDoc.

Ruby: droplet_kit

droplet_kit is our Ruby API client library. Version 2.1.0 has Load Balancer support and is now available on Rubygems. You can install it with this command:

gem install droplet_kit

And you can create a new Load Balancer like so:

load_balancer = DropletKit::LoadBalancer.new(
  name: 'example-lb-001',
  algorithm: 'round_robin',
  tag: 'web:prod',
  redirect_http_to_https: true,
  region: 'nyc3',
  forwarding_rules: [
    DropletKit::ForwardingRule.new(
      entry_protocol: 'http',
      entry_port: 80,
      target_protocol: 'http',
      target_port: 80,
      certificate_id: '',
      tls_passthrough: false
    )
  ],
  sticky_sessions: DropletKit::StickySession.new(
    type: 'none',
    cookie_name: '',
    cookie_ttl_seconds: nil
  ),
  health_check: DropletKit::HealthCheck.new(
    protocol: 'http',
    port: 80,
    path: '/',
    check_interval_seconds: 10,
    response_timeout_seconds: 5,
    healthy_threshold: 5,
    unhealthy_threshold: 3
  )
)

client.load_balancers.create(load_balancer)

Community Supported

Besides our official open source projects, there are two community contributions we'd like to highlight:

Thanks to our colleagues Viola and Andrew for working on these features, and the open source community for including Load Balancer support in their projects. In particular, we want to give a special shout out to Paul Stack and the rest of our friends at HashiCorp who added support to Terraform so quickly. You rock!

We're excited to see more tools add Load Balancer support. If you're the maintainer of a project that has added support, Tweet us @digitalocean. We can help spread the word!

Rafael Rosa
Product Manager, High Availability


Thoughts on a Collection: Apple II Floppies in the Realm of the Now

Published 15 Mar 2017 by Jason Scott in ASCII by Jason Scott.

I was connected with The 3D0G Knight, a long-retired Apple II pirate/collector who had built up a set of hundreds of floppy disks acquired from many different locations and friends decades ago. He generously sent me his entire collection to ingest into a more modern digital format, as well as the Internet Archive’s software archive.

The floppies came in a box without any sort of sleeves for them, with what turned out to be roughly 350 of them removed from “ammo boxes” by 3D0G from his parents’ house. The disks all had labels of some sort, and a printed index came along with it all, mapped to the unique disk ID/Numbers that had been carefully put on all of them years ago. I expect this was months of work at the time.

Each floppy is 140k of data on each side, and in this case, all the floppies had been single-sided and clipped with an additional notch with a hole punch to allow the second side to be used as well.

Even though they’re packed a little strangely, there was no damage anywhere, nothing bent or broken or ripped, and all the items were intact. It looked to be quite the bonanza of potentially new vintage software.

So, this activity is at the crux of the work going on with both the older software on the Internet Archive and what I’m doing with web browser emulation and increasingly easy access to the works of old. The most important thing, over everything else, is to close the air gap – get the data off these disappearing floppy disks and into something online where people or scripts can benefit from them and research them. Almost everything else – scanning of cover art, ingestion of metadata, pulling together the history of a company or cross-checking what titles had which collaborators… that has nowhere near the expiration date of the magnetized coated plastic disks going under. This needs us and it needs us now.

The way that things currently work with Apple II floppies is to separate them into two classes: Disks that Just Copy, and Disks That Need A Little Love. The Little Love disks, when found, are packed up and sent off to one of my collaborators, 4AM, who has the tools and the skills to get data off particularly tenacious floppies, as well as doing “silent cracks” of commercial floppies to preserve what’s on them as best as possible.

Doing the “Disks that Just Copy” is a mite easier. I currently have an Apple II system on my desk that connects via USB-to-serial connection to my PC. There, I run a program called Apple Disk Transfer that basically turns the Apple into a Floppy Reading Machine, with pretty interface and everything.

Apple Disk Transfer (ADT) has been around a very long time and knows what it’s doing – a floppy disk with no trickery on the encoding side can be ripped out and transferred to a “.DSK” file on the PC in about 20 seconds. If there’s something wrong with the disk in terms of being an easy read, ADT is very loud about it. I can do other things while reading floppies, and I end up with a whole pile of filenames when it’s done. The workflow, in other words, isn’t so bad as long as the floppies aren’t in really bad shape. In this particular set, the floppies were in excellent shape, except when they weren’t, and the vast majority fell into the “excellent” camp.

The floppy drive that sits at the middle of this looks like some sort of nightmare, but it helps to understand that with Apple II floppy drives, you really have to have the cover removed at all times, because you will be constantly checking the read head for dust, smudges, and so on. Unscrewing the whole mess and putting it back together for looks just doesn’t scale. It’s ugly, but it works.

It took me about three days (while doing lots of other stuff) but in the end I had 714 .dsk images pulled from both sides of the floppies, which works out to 357 floppy disks successfully imaged. Another 20 or so are going to get a once over but probably are going to go into 4am’s hands to get final evaluation. (Some of them may in fact be blank, but were labelled in preparation, and so on.) 714 is a lot to get from one person!

As mentioned, an Apple II 5.25″ floppy disk image is pretty much always 140k. The names of the floppy are mine, taken off the label, or added based on glancing inside the disk image after it’s done. For a quick glance, I use either an Apple II emulator called Applewin, or the fantastically useful Apple II disk image investigator Ciderpress, which is frankly the gold standard for what should be out there for every vintage disk/cartridge/cassette image. As might be expected, labels don’t always match contents. C’est la vie.

As for the contents of the disks themselves; this comes down to what the “standard collection” was for an Apple II user in the 1980s who wasn’t afraid to let their software library grow utilizing less than legitimate circumstances. Instead of an elegant case of shiny, professionally labelled floppy diskettes, we get a scribbled, messy, organic collection of all range of “warez” with no real theme. There’s games, of course, but there’s also productivity, utilities, artwork, and one-off collections of textfiles and documentation. Games that were “cracked” down into single-file payloads find themselves with 4-5 other unexpected housemates and sitting behind a menu. A person spending the equivalent of $50-$70 per title might be expected to have a relatively small and distinct library, but someone who is meeting up with friends or associates and duplicating floppies over a few hours will just grab bushels of strange.

The result of the first run is already up on the Archive: A 37 Megabyte .ZIP file containing all the images I pulled off the floppies. 

In terms of what will be of relevance to later historians, researchers, or collectors, that zip file is probably the best way to go – it’s not munged up with the needs of the Archive’s structure, and is just the disk images and nothing else.

This single .zip archive might be sufficient for a lot of sites (go git ‘er!) but as mentioned infinite times before, there is a very strong ethic across the Internet Archive’s software collection to make things as accessible as possible, and hence there are nearly 500 items in the “3D0G Knight Collection” besides the “download it all” item.

The rest of this entry talks about why it’s 500 and not 714, and how it is put together, and the rest of my thoughts on this whole endeavor. If you just want to play some games online or pull a 37mb file and run, cackling happily, into the night, so be it.

The relatively small number of people who have exceedingly hard opinions on how things “should be done” in the vintage computing space will also want to join the folks who are pulling the 37mb file. Everything else done by me after the generation of the .zip file is in service of the present and near future. The items that number in the hundreds on the Archive that contain one floppy disk image and interaction with it are meant for people to find now. I want someone to have a vague memory of a game or program once interacted with, and if possible, to find it on the Archive. I also like people browsing around randomly until something catches their eye and to be able to leap into the program immediately.

To those ends, and as an exercise, I’ve acquired or collaborated on scripts to do the lion’s share of analysis on software images to prep them for this living museum. These scripts get it “mostly” right, and the rough edges they bring in from running are easily smoothed over by a microscopic amount of post-processing manual attention, like running a piece of sandpaper over a machine-made joint.

Again, we started out 714 disk images. The first thing done was to run them against a script that has hash checksums for every exposed Apple II disk image on the Archive, which now number over 10,000. Doing this dropped the “uniquely new” disk images from 714 to 667.

Next, I concatenated disk images that are part of the same product into one item: if a paint program has two floppy disk images for each of the sides of its disk, those become a single item. In one or two cases, the program spans multiple floppies, so 4-8 (and in one case, 14!) floppy images become a single item. Doing this dropped the total from 667 to 495 unique items. That’s why the number is significantly smaller than the original total.

Let’s talk for a moment about this.

Using hashes and comparing them is the roughest of rough approaches to de-duplicating software items. I do it with Apple II images because they tend to be self contained (a single .dsk file) and because Apple II software has a lot of people involved in it. I’m not alone by any means in acquiring these materials and I’m certainly not alone in terms of work being done to track down all the unique variations and most obscure and nearly lost packages written for this platform. If I was the only person in the world (or one of a tiny sliver) working on this I might be super careful with each and every item to catalog it – but I’m absolutely not; I count at least a half-dozen operations involved in Apple II floppy image ingestion.

And as a bonus, it’s a really nice platform. When someone puts their heart into an Apple II program, it rewards them and the end user as well – the graphics can be charming, the program flow intuitive, and the whole package just gleams on the screen. It’s rewarding to work with this corpus, so I’m using it as a test bed for all these methods, including using hashes.

But hash checksums are seriously not the be-all for this work. Anything can make a hash different – an added file, a modified bit, or a compilation of already-on-the-archive-in-a-hundred-places files that just happen to be grouped up slightly different than others. That said, it’s not overwhelming – you can read about what’s on a floppy and decide what you want pretty quickly; gigabytes will not be lost and the work to track down every single unique file has potential but isn’t necessary yet.

(For the people who care, the Internet Archive generates three different hashes (md5, crc32, sha1) and lists the size of the file – looking across all of those for comparison is pretty good for ensuring you probably have something new and unique.)
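As a rough illustration of that first de-duplication pass, here is a minimal Python sketch of the idea: it assumes a hypothetical known_hashes.txt (one MD5 per line for disk images already on the Archive) and a folder of freshly imaged .dsk files. It is a sketch of the approach, not the actual script used.

import hashlib
from pathlib import Path

# Assumed inputs: known_hashes.txt and a directory of newly imaged .dsk files.
known = set(Path("known_hashes.txt").read_text().split())

for dsk in sorted(Path("imaged_floppies").glob("*.dsk")):
    digest = hashlib.md5(dsk.read_bytes()).hexdigest()
    status = "already archived" if digest in known else "NEW"
    print(dsk.name, digest, status)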

Once the items are up there, the Screen Shotgun whips into action. It plays the programs in the emulator, takes screenshots, leafs off the unique ones, and then assembles it all into a nice package. Again, not perfect but left alone, it does the work with no human intervention and gets things generally right. If you see a screenshot in this collection, a robot did it and I had nothing to do with it.

This leads, of course, to scaring out which programs are a tad not-bootable, and by that I mean that they boot up in the emulator and the emulator sees them and all, but the result is not that satisfying:

On a pure accuracy level, this is doing exactly what it’s supposed to – the disk wasn’t ever a properly packaged, self-contained item, and it needs a boot disk to go in the machine first before you swap the floppy. I intend to work with volunteers to help with this problem, but here is where it stands.

The solution in the meantime is a Java program modified by Kevin Savetz, which analyzes the floppy disk image and prints all the disk information it can find, including the contents of BASIC programs and textfiles. Here’s a non-booting disk where this worked out. The result is that this all gets ingested into the search engine of the Archive, and so if you’re looking for a file within the disk images, there’s a chance you’ll be able to find it.

Once the robots have their way with all the items, I can go in and fix a few things, like screenshots that went south, or descriptions and titles that don’t reflect what actually boots up. The amount of work I, a single person, have to do is therefore reduced to something manageable.

I think this all works well enough for the contemporary vintage software researcher and end user. Perhaps that opinion is not universal.

What I can say, however, is that the core action here – of taking data away from a transient and at-risk storage medium and putting it into a slightly less transient, less at-risk storage medium – is 99% of the battle. To have the will to do it, to connect with the people who have these items around and to show them it’ll be painless for them, and to just take the time to shove floppies into a drive and read them, hundreds of times… that’s the huge mountain to climb right now. I no longer have particularly deep concerns about technology failing to work with these digital images, once they’re absorbed into the Internet. It’s this current time, out in the cold, unknown and unloved, that they’re the most at risk.

The rest, I’m going to say, is gravy.

I’ll talk more about exactly how tasty and real that gravy is in the future, but for now, please take a pleasant walk in the 3D0G Knight’s Domain.


The Followup

Published 14 Mar 2017 by Jason Scott in ASCII by Jason Scott.

Writing about my heart attack garnered some attention. I figured it was only right to fill in later details and describe what my current future plans are.

After the previous entry, I went back into the emergency room of the hospital I was treated at, twice.

The first time was because I “felt funny”; I just had no grip on “is this the new normal” and so just to understand that, I went back in and got some tests. They did an EKG, a blood test, and let me know all my stats were fine and I was healing according to schedule. That took a lot of stress away.

Two days later, I went in because I was having a marked shortness of breath, where I could not get enough oxygen in and it felt a little like I was drowning. Another round of tests, and one of the cardiologists mentioned a side effect of one of the drugs I was taking was this sort of shortness/drowning. He said it usually went away and the company claimed 5-7% of people got this side effect, but that they observed more like 10-15%. They said I could wait it out or swap drugs. I chose swap. After that, I’ve had no other episodes.

The hospital thought I should stay in Australia for 2 weeks before flying. Thanks to generosity from both MuseumNext and the ACMI, my hosts, that extra AirBnB time was basically paid for. MuseumNext also worked to help move my international flight ahead the weeks needed; a very kind gesture.

Kind gestures abounded, to be clear. My friend Rochelle extended her stay from New Zealand to stay an extra week; Rachel extended hers to match my new departure date. Folks rounded up funds and sent them along, which helped cover some additional costs. Visitors stopped by the AirBnB when I wasn’t really taking any walks outside, to provide additional social contact.

Here is what the blockage looked like, before and after. As I said, roughly a quarter of my heart wasn’t getting any significant blood and somehow I pushed through it for nearly a week. The insertion of a balloon and then a metal stent opened the artery enough for the blood flow to return. Multiple times, people made it very clear that this could have finished me off handily, and mostly luck involving how my body reacted was what kept me going and got me in under the wire.

From the responses to the first entry, it appears that a lot of people didn’t know heart attacks could be a lingering, growing issue and not just a bolt of lightning that strikes in the middle of a show or while walking down the street. If nothing else, I’m glad that it’s caused a number of people to be aware of how symptoms can present themselves, as well as getting people to check their cholesterol, which I didn’t see as a huge danger compared to other factors, and which turned out to be significant indeed.

As for drugs, I’ve got a once a day waterfall of pills for blood pressure, cholesterol, heart healing, anti-clotting, and my long-handled annoyances of gout (which I’ve not had for years thanks to the pills). I’m on some of them for the next few months, some for a year, and some forever. I’ve also been informed I’m officially at risk for another heart attack, but the first heart attack was my hint in that regard.

As I healed, and understood better what was happening to me, I got better remarkably quick. There is a single tiny dot on my wrist from the operation, another tiny dot where the IV was in my arm at other times. Rachel gifted a more complicated Fitbit to replace the one I had, with the new one tracking sleep schedule and heart rate, just to keep an eye on it.

A day after landing back in the US, I saw a cardiologist at Mt. Sinai, one of the top doctors, who gave me some initial reactions to my charts and information: I’m very likely going to be fine, maybe even better than before. I need to take care of myself, and I was. If I was smoking or drinking, I’d have to stop, but since I’ve never had alcohol and I’ve never smoked, I’m already ahead of that game. I enjoy walking, a lot. I stay active. And as of getting out of the hospital, I am vegan for at least a year. Caffeine’s gone. Raw vegetables are in.

One might hesitate putting this all online, because the Internet is spectacularly talented at generating hatred and health advice. People want to help – it comes from a good place. But I’ve got a handle on it and I’m progressing well; someone hitting me up with a nanny-finger-wagging paragraph and 45 links to change-your-life-buy-my-book.com isn’t going to help much. But go ahead if you must.

I failed to mention it before, but when this was all going down, my crazy family of the Internet Archive jumped in, everyone from Dad Brewster through to all my brothers and sisters scrambling to find me my insurance info and what they had on their cards, as I couldn’t find mine. It was something really late when I first pinged everyone with “something is not good” and everyone has been rather spectacular over there. Then again, they tend to be spectacular, so I sort of let that slip by. Let me rectify that here.

And now, a little bit on health insurance.

I had travel insurance as part of my health insurance with the Archive. That is still being sorted out, but a large deposit had to be put on the Archive’s corporate card as a down-payment during the sorting out, another fantastic generosity, even if it’s technically a loan. I welcome the coming paperwork and nailing down of financial brass tacks for a specific reason:

I am someone who once walked into an emergency room with no insurance (back in 2010), got a blood medication IV, stayed around a few hours, and went home, generating a $20,000 medical bill in the process. It got knocked down to $9k over time, and I ended up being thrown into a low-income program they had that allowed them to write it off (I think). That bill could have destroyed me, financially. Therefore, I’m super sensitive to the costs of medical care.

In Australia, it is looking like the heart operation and the 3 day hospital stay, along with all the tests and staff and medications, are going to round out around $10,000 before the insurance comes in and knocks that down further (I hope). In the US, I can’t imagine that whole thing being less than $100,000.

The biggest culture shock for me was how little any of the medical staff, be they doctors or nurses or administrators, cared about the money. They didn’t have any real info on what things cost, because pretty much everything is free there. I’ve equated it to asking a restaurant where the best toilets to use a few hours after your meal are – they might have some random ideas, but nobody’s really thinking that way. It was a huge factor in my returning to the emergency room so willingly; each visit, all-inclusive, was $250 AUD, which is even less in US dollars. $250 is something I’ll gladly pay for peace of mind, and I did, twice. The difference in the experience is remarkable. I realize this is a hot button issue now, but chalk me up as another person for whom a life-changing experience could come within a remarkably close distance of being an influence on where I might live in the future.

Dr. Sonny Palmer, who did insertion of my stent in the operating room.

I had a pile of plans and things to get done (documentaries, software, cutting down on my possessions, and so on), and I’ll be getting back to them. I don’t really have an urge to maintain some sort of health narrative on here, and I certainly am not in the mood to urge any lifestyle changes or preach a way of life to folks. I’ll answer questions if people have them from here on out, but I’d rather be known for something other than powering through a heart attack, and maybe, with some effort, I can do that.

Thanks again to everyone who has been there for me, online and off, in person and far away, over the past few weeks. I’ll try my best to live up to your hopes about what opportunities my second chance at life will give me.

 


On the Red Mud Trail in Yunnan

Published 13 Mar 2017 by Tom Wilson in tom m wilson.

I finally made it to downtown Kunming last weekend.  Amazingly there were still a few of the old buildings standing in the centre (although they were a tiny minority). Walking across Green Lake, a lake in downtown Kunming with various interconnected islands in its centre, I passed through a grove of bamboo trees. Old women […]

Kubuntu Podcast 21

Published 13 Mar 2017 by ovidiu-florin in Kubuntu.

Show Audio Feeds

MP3: http://feeds.feedburner.com/KubuntuPodcast-mp3

OGG: http://feeds.feedburner.com/KubuntuPodcast-ogg

Pocket Casts links

OGG

MP3

Show Hosts

Ovidiu-Florin Bogdan

Rick Timmis

Aaron Honeycutt (Video/Audio Podcast Production)

Intro

What have we (the hosts) been doing ?

Sponsor: Big Blue Button


Those of you that have attended the Kubuntu parties will have seen our Big Blue Button conference and online education service.

Video, Audio, Presentation, Screenshare and whiteboard tools.

We are very grateful to Fred Dixon and the team at BigBlueButton.org; go check out their project.

Kubuntu News

Elevator Picks

Identify, install and review one app each from the Discover software center and do a short screen demo and review.

In Focus

Sponsor: Linode


Linode, an awesome VPS provider with super fast SSDs, data connections, and top-notch support. We have worked out a sponsorship for a server to build packages quicker and get to our users faster.

Instantly deploy and get a Linode Cloud Server up and running in seconds with your choice of Linux distro, resources, and node location.

BIG SHOUT OUT to Linode for working with us!

Kubuntu Developer Feedback

Sponsor: Bytemark

Bytemark was founded with a simple mission: reliable, UK hosting. Co-founders Matthew Bloch & Peter Taphouse, both engineers by nature built the business from the ground up.

Today, they lead a team of 31 staff who operate Bytemark’s own data centre in York, monitor its 10Gbps national network and deliver 24/7 support to clients of all sizes. Brands hosted on Bytemark’s network include the Royal College of Art, data.gov.uk and DVLA Auctions, and of course Kubuntu.

Drop by their website and get started with a free month of cloud hosting!

http://www.bytemark.co.uk/r/kubuntu

Contact Us

How to contact the Kubuntu Team:

How to contact the Kubuntu Podcast Team:


EC Web Accessibility Directive Expert Group (WADEX)

Published 13 Mar 2017 by Shadi Abou-Zahra in W3C Blog.


The European Commission (EC) recently launched the  Web Accessibility Directive Expert Group (WADEX). This group has the mission “to advise the Commission in relation to the preparation of delegated acts, and in the early stages of the preparation of implementing acts” in relation to the EU Directive on the accessibility of the websites and mobile applications of public sector bodies.

More specifically, the focus of this group is to advise the EC on the development of:

This relates closely to the development of the W3C Web Content Accessibility Guidelines (WCAG) 2.1, which is expected to provide improvements for mobile accessibility. It also relates to several other W3C resources on web accessibility, including the Website Accessibility Conformance Evaluation Methodology (WCAG-EM) and its Report Generator, as well as Involving Users in Evaluating Web Accessibility.

I am delighted to have been appointed as an expert to the WADEX sub-group, to represent W3C. With this effort I hope we can further improve the harmonization of web accessibility standards and practices across Europe and internationally, also in line with the EC objectives for a single digital market.


Through the mirror-glass: Capture of artwork framed in glass.

Published 13 Mar 2017 by slwacns in State Library of Western Australia Blog.

 

State Library’s collection material that is selected for digitisation comes to the Digitisation team in a variety of forms. This blog describes capture of artwork that is framed and encased within glass.

So let’s see how the item is digitized.


Two large framed original artworks from the picture book Teacup written by Rebecca Young and illustrated by Matt Ottley posed some significant digitisation challenges.

When artwork from the Heritage collection is framed in glass, the glass acts like a mirror and without great care during the capture process, the glass can reflect whatever is in front of it, meaning that the photographer’s reflection (and the reflection of capture equipment) can obscure the artwork.

This post shows how we avoided this issue during the digitisation of two large framed paintings, Cover illustration for Teacup and also page 4-5 [PWC/255/01] and The way the whales called out to each other [PWC/255/09].

Though it is sometimes possible to remove the artwork from its housing, there are occasions when this is not suitable. In this example, the decision was made to not remove the artworks from behind glass as the Conservation staff assessed that it would be best if the works were not disturbed from their original housing.

PWC/255/01 and PWC/255/09

The most critical issue was to be in control of the light. Rearranging equipment in the workroom allowed for the artwork to face a black wall, a method used by photographers to eliminate reflections.

 

We used black plastic across the entrance of the workroom to eliminate all unwanted light.


The next challenge was to set up the camera. For this shoot we used our Hasselblad H3D11 (a 39 megapixel camera with excellent colour fidelity).

 

Prior to capture, we gave the glass a good clean with an anti-static cloth. In the images below, you can clearly see the reflection caused by the mirror effect of the glass.

 

Since we don’t have a dedicated photographic studio we needed to be creative when introducing extra light to allow for the capture. Bouncing the light off a large white card prevented direct light from falling on the artwork and reduced a significant number of reflections. We also used a polarizing filter on the camera lens to reduce reflections even further.


Once every reflection was eliminated and the camera set square to the artwork, we could test colour balance and exposure.

In the image below, you can see that we made the camera look like ‘Ned Kelly’ to ensure any shiny metal from the camera body didn’t reflect in the glass. We used the camera’s computer controlled remote shutter function to further minimise any reflections in front of the glass.


 

The preservation file includes technically accurate colour and greyscale patches to allow for colour fidelity and a ruler for accurate scaling in future reproductions.


The preservation file and a cropped version for access were then ingested into the State Library’s digital repository. The repository allows for current access and future reproductions to be made.

From this post you can see the care and attention that goes into preservation digitisation, ‘Do it right, do it once’ is our motto.



Block Storage Comes to Singapore; Five More Datacenters on the Way!

Published 12 Mar 2017 by DigitalOcean in DigitalOcean Blog.

Today, we're excited to share that Block Storage is available to all Droplets in our Singapore region. With Block Storage, you can scale your storage independently of your compute and have more control over how you grow your infrastructure, enabling you to build and scale larger applications more easily. Block Storage has been a key part of our overall focus on strengthening the foundation of our platform to increase performance and enable our customers to scale.

We've seen incredible engagement since our launch last July. Together, you have created more than 95,000 Block Storage volumes in SFO2, NYC1, and FRA1 to scale databases, take backups, store media, and much more; SGP1 is our fourth datacenter with Block Storage and the first in the Asia-Pacific region.

As we continue to upgrade and augment our other datacenters, we'll be ensuring that Block Storage is added too. In order to help you plan your deployments, we've finalized the timelines for the next five regions. Here is the schedule we're targeting for Block Storage rollout in 2017:

We'll have more specific updates to share on SFO1, NYC2, and AMS2 in a future update.


Inside SGP1, our Singapore Datacenter region.

Thanks to everyone who has given us feedback and used Block Storage so far. Please keep it coming. You can try creating your first Block Storage volume in Singapore today!

Ben Schaechter
Product Manager, Droplet & Block Storage


Week #8: Warriors are on the right path

Published 12 Mar 2017 by legoktm in The Lego Mirror.

As you might have guessed due to the lack of previous coverage of the Warriors, I'm not really a basketball fan. But the Warriors are in an interesting place right now. After setting an NBA record for being the fastest team to clinch a playoff spot, Coach Kerr has started resting his starters and the Warriors have a three game losing streak. This puts the Warriors in danger of losing their first seed spot with the San Antonio Spurs only half a game behind them.

But I think the Warriors are doing the right thing. Last year the Warriors set the record for having the best regular season record in NBA history, but also became the first team in NBA history to have a 3-1 advantage in the finals and then lose.

No doubt there was immense pressure on the Warriors last year. It was just expected of them to win the championship; there really wasn't anything else.

So this year they can easily avoid a lot of that pressure by not being the best team in the NBA on paper. They shouldn't worry about being the top seed; just finish in the top four and play their best in the playoffs. Get some rest: they have a huge advantage over every other team simply by already being in the playoffs with so many games left to play.


28th birthday of the Web

Published 12 Mar 2017 by Jeff Jaffe in W3C Blog.

Today, Sunday 12 March, 2017, the W3C celebrates the 28th birthday of the Web.

We are honored to work with our Director, Sir Tim Berners-Lee, and our members to create standards for the Web for All and the Web on Everything.

Under Tim’s continuing leadership, hundreds of member organizations and thousands of engineers world-wide work on our vital mission – Leading the Web to its Full Potential.

For more information on what Tim views as both challenges and hopes for the future, see: “Three challenges for the web, according to its inventor” at the World Wide Web Foundation.


Kubuntu Podcast #18 – Yakkety Yak

Published 10 Mar 2017 by ovidiu-florin in Kubuntu.

Show Audio Feeds

MP3: http://feeds.feedburner.com/KubuntuPodcast-mp3

OGG: http://feeds.feedburner.com/KubuntuPodcast-ogg

Pocket Casts links

OGG

MP3

Show Hosts

Ovidiu-Florin Bogdan

Rick Timmis

Aaron Honeycutt (Video/Audio Podcast Production)

Intro

What have we (the hosts) been doing ?

Sponsor: Big Blue Button


Those of you who have attended the Kubuntu parties will have seen our Big Blue Button conference and online education service.

It provides video, audio, presentation, screen-share and whiteboard tools.

We are very grateful to Fred Dixon and the team at BigBlueButton.org. Go check out their project.

Kubuntu News

Elevator Picks

Sponsor: Linode


Linode, an awesome VPS provider with super-fast SSDs, fast data connections, and top-notch support. We have worked out a sponsorship for a server to build packages quicker and get them to our users faster.

Instantly deploy and get a Linode Cloud Server up and running in seconds with your choice of Linux distro, resources, and node location.

BIG SHOUT OUT to Linode for working with us!

Kubuntu Developer Feedback

Sponsor: Bytemark

Bytemark was founded with a simple mission: reliable UK hosting. Co-founders Matthew Bloch & Peter Taphouse, both engineers by nature, built the business from the ground up.

Today, they lead a team of 31 staff who operate Bytemark’s own data centre in York, monitor its 10Gbps national network and deliver 24/7 support to clients of all sizes. Brands hosted on Bytemark’s network include the Royal College of Art, data.gov.uk and DVLA Auctions, and of course Kubuntu.

Drop by their website and get started with a free month of cloud hosting!

Affiliate link: http://www.bytemark.co.uk/r/kubuntu

Listener Feedback

Valorie and the whole Kubuntu team,
I have used Kubuntu for many years; I have been using Linux since 2000 (Mandrake and then Kubuntu).
So I want to thank you and congratulate you on your work and your distribution, which I look forward to every 6 months.
Good luck and thank you again.
Sincerely: TuxMario
=========
TROLL ON TWITTER
=========

Contact Us

How to contact the Kubuntu Team:

How to contact the Kubuntu Podcast Team:


China – Arrival in the Middle Kingdom

Published 9 Mar 2017 by Tom Wilson in tom m wilson.

I’ve arrived in Kunming, the little red dot you can see on the map above.  I’m here to teach research skills to undergraduate students at Yunnan Normal University.  As you can see, I’ve come to a point where the foothills of the Himalayas fold up into a bunch of deep creases.  Yunnan province is the area of […]

WWW2017 and W3Cx Webdev contests at Perth’s Festival of the Web

Published 8 Mar 2017 by Marie-Claire Forgue in W3C Blog.

WWW2017 is in less than a month! The 26th edition of the annual World Wide Web Conference will be held in Perth, Australia, from 2 to 7 April 2017.

This year again, W3C proposes a W3C track where conference attendees are invited to learn from, meet and discuss with W3C’s members and team experts. Over two days, on Wednesday 4 and Thursday 5 April, the current state of the art and future developments in Web Accessibility, Web of Things, Spatial Data on the Web and Web privacy will be presented and demonstrated. Many thanks to our members and the W3C Australia Office for making this happen!


W3C also participates in the Festival of the Web (FoW). The conference organizers have created a bigger event encompassing many activities, including Web for All (W4A) (and its accessibility hack), co-organized by our colleague Vivienne Conway (Edith Cowan University). FoW’s numerous activities run from 2 to 9 April 2017 all over the city, with the people and for the people, bringing together entrepreneurs, academia, industry, government and the Perth community.

And for the attention of Web developers and designers who love to code and have fun, my colleagues and I have designed not one but three #webdev contests – see below for a short description of each:

Look for the contests’ long descriptions, with accompanying tips and resources on the W3Cx’s contests page.

The contests are open to anyone and we’ll accept your projects until Friday 6 April (at 23h59 UTC) (see participation rules). The jury members of the competition are Michel Buffa (W3Cx trainer, University Côte d’Azur), Bert Bos (co-inventor of CSS) and myself.

We will deliberate on Friday 7 April 2017 — on site in Perth. Looking forward to meeting you there!


Introducing Similarity Search at Flickr

Published 7 Mar 2017 by Clayton Mellina in code.flickr.com.

At Flickr, we understand that the value in our image corpus is only unlocked when our members can find photos and photographers that inspire them, so we strive to enable the discovery and appreciation of new photos.

To further that effort, today we are introducing similarity search on Flickr. If you hover over a photo on a search result page, you will reveal a “…” button that exposes a menu that gives you the option to search for photos similar to the photo you are currently viewing.

In many ways, photo search is very different from traditional web or text search. First, the goal of web search is usually to satisfy a particular information need, while with photo search the goal is often one of discovery; as such, it should be delightful as well as functional. We have taken this to heart throughout Flickr. For instance, our color search feature, which allows filtering by color scheme, and our style filters, which allow filtering by styles such as “minimalist” or “patterns,” encourage exploration. Second, in traditional web search, the goal is usually to match documents to a set of keywords in the query. That is, the query is in the same modality—text—as the documents being searched. Photo search usually matches across modalities: text to image. Text querying is a necessary feature of a photo search engine, but, as the saying goes, a picture is worth a thousand words. And beyond saving people the effort of so much typing, many visual concepts genuinely defy accurate description. Now, we’re giving our community a way to easily explore those visual concepts with the “…” button, a feature we call the similarity pivot.

The similarity pivot is a significant addition to the Flickr experience because it offers our community an entirely new way to explore and discover the billions of incredible photos and millions of incredible photographers on Flickr. It allows people to look for images of a particular style, it gives people a view into universal behaviors, and even when it “messes up,” it can force people to look at the unexpected commonalities and oddities of our visual world with a fresh perspective.

What is “similarity”?

To understand how an experience like this is powered, we first need to understand what we mean by “similarity.” There are many ways photos can be similar to one another. Consider some examples.

It is apparent that all of these groups of photos illustrate some notion of “similarity,” but each is different. Roughly, they are: similarity of color, similarity of texture, and similarity of semantic category. And there are many others that you might imagine as well.

What notion of similarity is best suited for a site like Flickr? Ideally, we’d like to be able to capture multiple types of similarity, but we decided early on that semantic similarity—similarity based on the semantic content of the photos—was vital to facilitate discovery on Flickr. This requires a deep understanding of image content for which we employ deep neural networks.

We have been using deep neural networks at Flickr for a while for various tasks such as object recognition, NSFW prediction, and even prediction of aesthetic quality. For these tasks, we train a neural network to map the raw pixels of a photo into a set of relevant tags, as illustrated below.

Internally, the neural network accomplishes this mapping incrementally by applying a series of transformations to the image, which can be thought of as a vector of numbers corresponding to the pixel intensities. Each transformation in the series produces another vector, which is in turn the input to the next transformation, until finally we have a vector that we specifically constrain to be a list of probabilities for each class we are trying to recognize in the image. To be able to go from raw pixels to a semantic label like “hot air balloon,” the network discards lots of information about the image, including information about  appearance, such as the color of the balloon, its relative position in the sky, etc. Instead, we can extract an internal vector in the network before the final output.

For common neural network architectures, this vector—which we call a “feature vector”—has many hundreds or thousands of dimensions. We can’t necessarily say with certainty that any one of these dimensions means something in particular as we could at the final network output, whose dimensions correspond to tag probabilities. But these vectors have an important property: when you compute the Euclidean distance between these vectors, images containing similar content will tend to have feature vectors closer together than images containing dissimilar content. You can think of this as a way that the network has learned to organize information present in the image so that it can output the required class prediction. This is exactly what we are looking for: Euclidean distance in this high-dimensional feature space is a measure of semantic similarity. The graphic below illustrates this idea: points in the neighborhood around the query image are semantically similar to the query image, whereas points in neighborhoods further away are not.

This measure of similarity is not perfect and cannot capture all possible notions of similarity—it will be constrained by the particular task the network was trained to perform, i.e., scene recognition. However, it is effective for our purposes, and, importantly, it contains information beyond merely the semantic content of the image, such as appearance, composition, and texture. Most importantly, it gives us a simple algorithm for finding visually similar photos: compute the distance in the feature space of a query image to each index image and return the images with lowest distance. Of course, there is much more work to do to make this idea work for billions of images.
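
To make that idea concrete, here is a minimal sketch (in Python with NumPy, not Flickr’s production code) of brute-force ranking by Euclidean distance in feature space. The array names and sizes are invented for illustration, and the random vectors merely stand in for feature vectors already extracted from a network:

    import numpy as np

    def rank_by_similarity(query_vec, index_vecs, top_k=10):
        """Return the indexes of the top_k items closest to the query in feature space."""
        diffs = index_vecs - query_vec                # (num_images, dim)
        dists = np.einsum('ij,ij->i', diffs, diffs)   # squared Euclidean distances
        return np.argsort(dists)[:top_k]              # smallest distance = most similar

    # Toy usage: random vectors stand in for real network feature vectors.
    rng = np.random.default_rng(0)
    index_vecs = rng.normal(size=(1000, 256)).astype(np.float32)
    query_vec = index_vecs[42] + rng.normal(scale=0.01, size=256).astype(np.float32)
    print(rank_by_similarity(query_vec, index_vecs, top_k=5))  # item 42 should come first

Exhaustive ranking like this is exactly what becomes intractable at billions of images, which is the problem the next section addresses.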

Large-scale approximate nearest neighbor search

With an index as large as Flickr’s, computing distances exhaustively for each query is intractable. Additionally, storing a high-dimensional floating point feature vector for each of billions of images takes a large amount of disk space and poses even more difficulty if these features need to be in memory for fast ranking. To solve these two issues, we adopt a state-of-the-art approximate nearest neighbor algorithm called Locally Optimized Product Quantization (LOPQ).

To understand LOPQ, it is useful to first look at a simple strategy. Rather than ranking all vectors in the index, we can first filter a set of good candidates and only do expensive distance computations on them. For example, we can use an algorithm like k-means to cluster our index vectors, find the cluster to which each vector is assigned, and index the corresponding cluster id for each vector. At query time, we find the cluster that the query vector is assigned to and fetch the items that belong to the same cluster from the index. We can even expand this set if we like by fetching items from the next nearest cluster.
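
As a rough sketch of this simple strategy (illustrative Python using scikit-learn’s KMeans, not the actual Flickr index; all data and parameters are made up), the coarse quantizer and inverted index might look something like this:

    import numpy as np
    from collections import defaultdict
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    index_vecs = rng.normal(size=(10_000, 64)).astype(np.float32)

    # Coarse quantizer: cluster the index once, then record which items fall in each cell.
    coarse = KMeans(n_clusters=100, n_init=10, random_state=0).fit(index_vecs)
    inverted_index = defaultdict(list)
    for item_id, cluster_id in enumerate(coarse.labels_):
        inverted_index[cluster_id].append(item_id)

    def candidates(query_vec, n_probe=2):
        """Fetch items from the n_probe clusters nearest to the query."""
        dists = np.linalg.norm(coarse.cluster_centers_ - query_vec, axis=1)
        nearest = np.argsort(dists)[:n_probe]
        return [i for c in nearest for i in inverted_index[c]]

    # Only these candidates are then ranked with exact (or approximate) distances.
    print(len(candidates(index_vecs[0])))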

This idea will take us far, but not far enough for a billions-scale index. For example, with 1 billion photos, we need 1 million clusters so that each cluster contains an average of 1000 photos. At query time, we will have to compute the distance from the query to each of these 1 million cluster centroids in order to find the nearest clusters. This is quite a lot. We can do better, however, if we instead split our vectors in half by dimension and cluster each half separately. In this scheme, each vector will be assigned to a pair of cluster ids, one for each half of the vector. If we choose k = 1000 to cluster both halves, we have k² = 1000 * 1000 = 1e6 possible pairs. In other words, by clustering each half separately and assigning each item a pair of cluster ids, we can get the same granularity of partitioning (1 million clusters total) with only 2 * 1000 distance computations with half the number of dimensions for a total computational savings of 1000x. Conversely, for the same computational cost, we gain a factor of k more partitions of the data space, providing a much finer-grained index.
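
A toy version of that split-and-cluster idea, again with invented data and a small k so it runs quickly, could look like this:

    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    dim, k = 64, 100                     # k clusters per half -> k * k = 10,000 cells
    index_vecs = rng.normal(size=(10_000, dim)).astype(np.float32)
    half = dim // 2

    # Cluster each half of the vectors independently.
    km_left = KMeans(n_clusters=k, n_init=10, random_state=0).fit(index_vecs[:, :half])
    km_right = KMeans(n_clusters=k, n_init=10, random_state=1).fit(index_vecs[:, half:])

    # Each item is indexed by a pair of cluster ids, one per half.
    pair_codes = np.stack([km_left.labels_, km_right.labels_], axis=1)

    def assign_pair(query_vec):
        """Assign a query to its (left, right) cell with only 2 * k half-length comparisons."""
        left = km_left.predict(query_vec[:half].reshape(1, -1))[0]
        right = km_right.predict(query_vec[half:].reshape(1, -1))[0]
        return left, right

    print(assign_pair(index_vecs[0]), pair_codes[0])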

This idea of splitting vectors into subvectors and clustering each split separately is called product quantization. When we use this idea to index a dataset it is called the inverted multi-index, and it forms the basis for fast candidate retrieval in our similarity index. Typically the distribution of points over the clusters in a multi-index will be unbalanced as compared to a standard k-means index, but this unbalance is a fair trade for the much higher resolution partitioning that it buys us. In fact, a multi-index will only be balanced across clusters if the two halves of the vectors are perfectly statistically independent. This is not the case in most real world data, but some heuristic preprocessing—like PCA-ing and permuting the dimensions so that the cumulative per-dimension variance is approximately balanced between the halves—helps in many cases. And just like the simple k-means index, there is a fast algorithm for finding a ranked list of clusters to a query if we need to expand the candidate set.

After we have a set of candidates, we must rank them. We could store the full vector in the index and use it to compute the distance for each candidate item, but this would incur a large memory overhead (for example, 256-dimensional vectors of 4-byte floats would require 1TB for 1 billion photos) as well as a computational overhead. LOPQ solves these issues by performing another product quantization, this time on the residuals of the data. The residual of a point is the difference vector between the point and its closest cluster centroid. Given a residual vector and the cluster indexes along with the corresponding centroids, we have enough information to reproduce the original vector exactly. Instead of storing the residuals, LOPQ product quantizes the residuals, usually with a higher number of splits, and stores only the cluster indexes in the index. For example, if we split the vector into 8 splits and each split is clustered with 256 centroids, we can store the compressed vector with only 8 bytes regardless of the number of dimensions to start (though certainly a higher number of dimensions will result in higher approximation error). With this lossy representation we can produce a reconstruction of a vector from the 8-byte codes: we simply take each quantization code, look up the corresponding centroid, and concatenate these 8 centroids together to produce a reconstruction. Likewise, we can approximate the distance from the query to an index vector by computing the distance between the query and the reconstruction. We can do this computation quickly for many candidate points by computing the squared difference of each split of the query to all of the centroids for that split. After computing this table, we can compute the squared difference for an index point by looking up the precomputed squared difference for each of the 8 indexes and summing them together to get the total squared difference. This caching trick allows us to quickly rank many candidates without resorting to distance computations in the original vector space.
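
The following sketch illustrates residual product quantization and the table-based approximate distance computation described above. It is illustrative only: it assumes the residuals have already been computed relative to their coarse centroids, the sizes are arbitrary, and in practice the ranking step would run over candidates from the coarse index rather than the whole set:

    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    dim, n_splits, n_centroids = 256, 8, 256      # 8 one-byte codes per vector
    residuals = rng.normal(size=(5_000, dim)).astype(np.float32)
    sub_dim = dim // n_splits

    # Train one codebook per split of the residual vector and encode every item.
    codebooks = []                                 # each is (n_centroids, sub_dim)
    codes = np.empty((len(residuals), n_splits), dtype=np.uint8)
    for s in range(n_splits):
        chunk = residuals[:, s * sub_dim:(s + 1) * sub_dim]
        km = KMeans(n_clusters=n_centroids, n_init=4, random_state=s).fit(chunk)
        codebooks.append(km.cluster_centers_.astype(np.float32))
        codes[:, s] = km.labels_.astype(np.uint8)

    def approx_sq_distances(query_residual):
        """Approximate squared distances from the query to every encoded vector."""
        # Precompute, per split, the squared distance from the query chunk
        # to all sub-centroids: an (n_splits, n_centroids) lookup table.
        table = np.empty((n_splits, n_centroids), dtype=np.float32)
        for s in range(n_splits):
            q_chunk = query_residual[s * sub_dim:(s + 1) * sub_dim]
            diffs = codebooks[s] - q_chunk
            table[s] = np.einsum('ij,ij->i', diffs, diffs)
        # Ranking a candidate is then just 8 table lookups and a sum.
        return table[np.arange(n_splits), codes].sum(axis=1)

    print(np.argsort(approx_sq_distances(residuals[0]))[:5])  # item 0 should rank near the top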

LOPQ adds one final detail: for each cluster in the multi-index, LOPQ fits a local rotation to the residuals of the points that fall in that cluster. This rotation is simply a PCA that aligns the major directions of variation in the data to the axes, followed by a permutation to heuristically balance the variance across the splits of the product quantization. Note that this is the exact preprocessing step that is usually performed at the top-level multi-index. It tends to make the approximate distance computations more accurate by mitigating errors introduced by assuming that each split of the vector in the product quantization is statistically independent from other splits. Additionally, since a rotation is fit for each cluster, these rotations serve to fit the local data distribution better.
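
As a simplified illustration of the per-cluster rotation (a plain PCA stands in here for the rotation-plus-permutation that LOPQ actually uses, and it additionally mean-centers the residuals), one could fit a local rotation to each cluster’s residuals like this, assuming each cluster has at least as many members as dimensions:

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)
    vecs = rng.normal(size=(5_000, 64)).astype(np.float32)

    # Coarse clustering; a residual is the vector minus its assigned centroid.
    coarse = KMeans(n_clusters=16, n_init=10, random_state=0).fit(vecs)
    rotations = {}
    for c in range(coarse.n_clusters):
        members = vecs[coarse.labels_ == c]
        cluster_residuals = members - coarse.cluster_centers_[c]
        # Full-rank PCA aligns the residuals' main directions of variation with
        # the axes before they are product quantized.
        rotations[c] = PCA(n_components=cluster_residuals.shape[1]).fit(cluster_residuals)

    def rotate_residual(vec):
        """Rotate a vector's residual with the rotation local to its coarse cluster."""
        c = coarse.predict(vec.reshape(1, -1))[0]
        residual = vec - coarse.cluster_centers_[c]
        return rotations[c].transform(residual.reshape(1, -1))[0]

    print(rotate_residual(vecs[0]).shape)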

Below is a diagram from the LOPQ paper that illustrates the core ideas of LOPQ. K-means (a) is very effective at allocating cluster centroids, illustrated as red points, that target the distribution of the data, but it has other drawbacks at scale as discussed earlier. In the 2d example shown, we can imagine product quantizing the space with 2 splits, each with 1 dimension. Product Quantization (b) clusters each dimension independently and cluster centroids are specified by pairs of cluster indexes, one for each split. This is effectively a grid over the space. Since the splits are treated as if they were statistically independent, we will, unfortunately, get many clusters that are “wasted” by not targeting the data distribution. We can improve on this situation by rotating the data such that the main dimensions of variation are axis-aligned. This version, called Optimized Product Quantization (c), does a better job of making sure each centroid is useful. LOPQ (d) extends this idea by first coarsely clustering the data and then doing a separate instance of OPQ for each cluster, allowing highly targeted centroids while still reaping the benefits of product quantization in terms of scalability.

LOPQ is state-of-the-art for quantization methods, and you can find more information about the algorithm, as well as benchmarks, here. Additionally, we provide an open-source implementation in Python and Spark which you can apply to your own datasets. The algorithm produces a set of cluster indexes that can be queried efficiently in an inverted index, as described. We have also explored use cases that use these indexes as a hash for fast deduplication of images and large-scale clustering. These extended use cases are studied here.

Conclusion

We have described our system for large-scale visual similarity search at Flickr. Techniques for producing high-quality vector representations for images with deep learning are constantly improving, enabling new ways to search and explore large multimedia collections. These techniques are being applied in other domains as well to, for example, produce vector representations for text, video, and even molecules. Large-scale approximate nearest neighbor search has importance and potential application in these domains as well as many others. Though these techniques are in their infancy, we hope similarity search provides a useful new way to appreciate the amazing collection of images at Flickr and surface photos of interest that may have previously gone undiscovered. We are excited about the future of this technology at Flickr and beyond.

Acknowledgements

Yannis Kalantidis, Huy Nguyen, Stacey Svetlichnaya, Arel Cordero. Special thanks to the rest of the Computer Vision and Machine Learning team and the Vespa search team who manages Yahoo’s internal search engine.



This Month’s Writer’s Block

Published 7 Mar 2017 by Dave Robertson in Dave Robertson.



W3C announces antitrust guidance document

Published 6 Mar 2017 by Wendy Seltzer in W3C Blog.

The W3C supports a community including more than 400 member organizations in developing Open Standards for the Open Web Platform. Many of these organizations are competitors in highly competitive markets. Others are researchers, consumers, and regulators. They come together in W3C Working Groups and Interest Groups to develop standards for interoperability: shared languages, formats, and APIs.

The W3C Process supports this work through a framework of consensus-based decision-making, a focus on technical requirements and interop testing, and our Royalty-Free Patent Policy.

As we’re joined by more participants from a wider range of industries, including Payments, Automotive, and Publishing, we wanted to highlight the role Process plays in helping competitors to work together fairly. Accordingly, we published a brief antitrust guidance document reflecting our existing practices.

Antitrust and competition law protect the public by requiring market competitors to act fairly in the marketplace. Open standards are pro-competitive and pro-user because an open, interoperable platform increases the opportunities for innovative competition in and on the Web. We continue to invite wide participation in the work of constructing these standards.


Beware Your Zips!

Published 4 Mar 2017 by Ipstenu (Mika Epstein) in Make WordPress Plugins.

It’s not you, it’s Google.

A lot of people have been mentioning that Gmail won’t send emails if they have zips attached. Other people have no problem. Reading the list of filetypes that are blocked, it took me a while to figure out what was going on. Not only does Gmail block bad attachments, it also checks inside your zips to see what files are there:

Certain file types (listed below), including their compressed form (like .gz or .bz2 files) or when found within archives (like .zip or .tgz files)

And guess what filetype Gmail just added on as a banned attachment? `.js` files. Explains perfectly why some of you had no problem and others have massive ones, right? Right.

My advice is, and has been for quite a while now, to use GitHub or Gitlab or Bitbucket or some sort of true development version control system. They all generate their own zips and you can just link us to them. Plus if it’s really complicated to explain what’s wrong, we can highlight the code for you.

I strongly recommend you NOT use free download sources like mega file and all those other ones, especially if they offer faster downloads for money. The majority come with scam popups, viruses, and x-rated ads. Of which I have seen enough. Dropbox is free and has public links. Plus you all have your own websites and can upload a zip there if needed.

#notice


Week #7: 999 assists and no more kneeling

Published 4 Mar 2017 by legoktm in The Lego Mirror.

Joe Thornton is one assist away from reaching 1,000 in his career. He's a team player - the recognition of scoring a goal doesn't matter to him, he just wants his teammates to score. And his teammates want him to achieve this milestone too, as shown by the Sharks passing to Thornton, and him passing back, instead of going directly for the easy empty-netter.

Oh, and now that the trade deadline has passed with no movement on the goalie front, it's time for In Jones We Trust:

via /u/MisterrAlex on reddit

In other news, Colin Kaepernick announced that he's going to be a free agent and opted out of the final year of his contract. But in even bigger news, he said he will stop kneeling for the national anthem. I don't know if he is doing that to make himself more marketable, but I wish he had stood (pun intended) by his beliefs.


FastMail Customer Stories – CoinJar

Published 2 Mar 2017 by David Gurvich in FastMail Blog.

Welcome to our first Customer Story video for 2017 featuring CoinJar Co-Founder and CEO Asher Tan.

CoinJar is Australia’s largest Bitcoin exchange and wallet, and it was while participating in a startup accelerator program that Asher had the idea for creating an easier way to buy, sell and spend the digital currency Bitcoin.

“We had decided to work on some Bitcoin ideas in the consumer space, which were quite lacking at the time,” Asher says.

Participating in the startup process was instrumental in helping Asher and his Co-Founder Ryan Zhou to really hone in on what type of business they needed to build.

CoinJar launched in Melbourne in 2013 and despite experiencing rapid success, Asher is quick to point out that his is a tech business that’s still working within a very new industry.

“It’s a very new niche industry and finding what works as a business, what people want, I think is an ongoing process. You’re continually exploring, but I think that’s what makes it exciting,” Asher says.

Asher says that one of the great things about launching a startup is you can choose the tools you want. Initially starting out with another email provider, Asher and Ryan were soon underwhelmed by both the performance and cost.

“The UI was pretty slow, the package was pretty expensive as well. There was also a lack of flexibility of some of the tools we wanted to use … so we were looking for other options and FastMail came up,” Asher says.

And while most of CoinJar’s business tools are self-hosted, they decided that FastMail was going to be the best choice to meet their requirements for secure, reliable and private email hosting.

Today CoinJar has team members all around the world and uses FastMail’s calendar and timezone feature to keep everyone working together.

CoinJar continues to innovate, recently launching a debit card that allows their customers to buy groceries using Bitcoin.

We’d like to thank Asher for his time and also Ben from Benzen Video Productions for helping us to put this story together.

You can learn more about CoinJar at https://www.coinjar.com.au/.


Songs for the Beeliar Wetlands

Published 2 Mar 2017 by Dave Robertson in Dave Robertson.

The title track of the forthcoming Kiss List album has just been included on an awesome fundraising compilation of 17 songs by local songwriters for the Beeliar wetlands. All proceeds go to #rethinkthelink. Get it while it's hot! You can purchase the whole album or just the songs you like.

Songs for the Beeliar Wetlands: Original Songs by Local Musicians (Volume 1) by Dave Robertson and The Kiss List



Stepping Off Meets the Public

Published 1 Mar 2017 by Tom Wilson in tom m wilson.

At the start of February I launched my new book, Stepping Off: Rewilding and Belonging in the South-West, at an event at Clancy’s in Fremantle.  On Tuesday evening this week I was talking about the book down at Albany Library.     As I was in the area I decided to camp for a couple of […]

What’s new in the W3C Process 2017?

Published 1 Mar 2017 by Philippe le Hegaret in W3C Blog.

As of today, W3C is using a new W3C Process. You can read the full list of substantive changes but I’d like to highlight 2 changes that are relevant for the W3C community:

  1. Added a process to make a Recommendation Obsolete: An obsolete specification is one that the W3C community has decided should no longer be used. For example, it may no longer represent best practices, or it may not have received wide adoption and seems unlikely to do so in the future. The status of an obsolete specification remains active under the W3C Patent Policy, but it is not recommended for future implementation.
  2. Simplified the steps to publish Edited Recommendations if the new revision makes only editorial changes to the previous Recommendation. This allows W3C to make corrections to its Recommendations without requiring technical review of the proposed changes, while still aiming to ensure adequate notice.

The W3C Process Document is developed by the W3C Advisory Board‘s Process Task Force working within the Revising W3C Process Community Group. Please send comments about our Process to public-w3process@w3.org.

We’re working on revamping and cleaning our entry page on Standards and Drafts and we’ll make sure to take those Process updates into account.


Digital Deli, reading history in the present tense

Published 1 Mar 2017 by Carlos Fenollosa in Carlos Fenollosa — Blog.

Digital Deli: The Comprehensive, User Lovable Menu Of Computer Lore, Culture, Lifestyles, And Fancy is an obscure book published in 1984. I found out about it after learning that the popular Steve Wozniak article titled "Homebrew and How the Apple Came to Be" belonged to a compilation of short articles.

The book

I'm amazed that this book isn't more cherished by the retrocomputing community, as it provides an incredible insight into the state of computers in 1984. We've all read books about their history, but Digital Deli provides a unique approach: it's written in present tense.

Articles are written with a candid and inspiring narrative. Micro computers were new back then, and the authors could only speculate about how they might change the world in the future.

The book is neatly structured in sections which cover topics from the origins of computing to Silicon Valley startups and reviews of specific systems. But the most interesting part for me is not the tech articles, but rather the sociological essays.

There are texts on how families welcome computers to the home, the applications of artificial intelligence, micros on Wall Street, and computers in the classroom.


Fortunately, a copy of the book has been preserved online, and I highly encourage you to check it out or track down a physical copy.

Besides Woz explaining how Apple was founded, don't miss out on Paul Lutus describing how he programmed AppleWriter in a cabin in the woods, Les Solomon envisioning the "magic box" of computing, Ted Nelson on information exchange and his Project Xanadu, Nolan Bushnell on video games, Bill Gates on software usability, the origins of the Internet... the list goes on and on.


If you love vintage computing you will find a fresh perspective, and if you were alive during the late 70s and early 80s you will feel a big nostalgia hit. In any case, do yourself a favor, grab a copy of this book, and keep it as a manifesto of the greatest revolution in computer history.

Tags: retro, books



Web Content Accessibility Guidelines 2.1 First Public Working Draft

Published 28 Feb 2017 by Joshue O Connor in W3C Blog.

The Accessibility Guidelines Working Group (AG WG) is very happy to announce that the first public working draft of the new Web Content Accessibility Guidelines (WCAG) 2.1 is available. This new version aims to build effectively on the previous foundations of WCAG 2.0 with particular attention being given to the three areas of accessibility on small-screen and touch mobile devices, to users with low vision, and to users with cognitive or learning disabilities.

WCAG 2.0 is a well-established, vibrant standard with a high level of adoption worldwide. It is still broadly applicable to many old and new technologies, covering a broad range of needs. However, technology doesn’t sleep, and as it marches on it brings new challenges for developers and users alike. WCAG 2.1 aims to address these diverse challenges in a substantial way. To do this, over the last three years the (newly renamed) AG WG undertook extensive research of the current user requirements for accessible content creation.

This work took place in task forces that bring together people with specific skills and expertise relating to these areas: accessibility on mobile devices, users with low vision, and users with cognitive or learning disabilities. Together this work forms the substantial basis of the new WCAG 2.1 draft.

WCAG 2.1 was initially described in the blog WCAG 2.1 under exploration, which proposed changing from an earlier model of WCAG 2.0 extensions to develop a dot-release of the guidelines. The charter to develop WCAG 2.1 was approved in January 2017. We are also happy to say that we have delivered the first public working draft within the charter’s promised timeline.

So what has the working group been doing? Working very hard looking at how to improve WCAG 2.0! To successfully iterate such a broad and deep standard has not been easy. There has been extensive research, discussion and debate within the task forces and the wider working group in order to better understand the interconnectedness and relationships between diverse and sometimes competing user requirements as we develop new success criteria.

This extensive work has resulted in the development of around 60 new success criteria, of which 28 are now included in this draft, to be used as measures of conformance to the standard. These success criteria have been collected from the three task forces as well as individual submissions. All of these success criteria must be vetted against the acceptance criteria before being formally accepted as part of the guidelines. As WCAG is an international standard and widely adopted, the working group reviews everything very carefully; at this point only three new proposed success criteria have cleared the formal Working Group review process, and these are still subject to change based on public feedback. The draft also includes many proposed Success Criteria that are under consideration but have not yet been formally accepted by the Working Group.

Further review and vetting is necessary but we are very happy to present our work to the world. This is a first draft and not a final complete version. In addition to refining the accepted and proposed Success Criteria included in the draft, the Working Group will continue to review additional proposals which could appear formally in a future version. Through the course of the year, the AG WG plans to process the remaining success criteria along with the input we gather from the public. The group will then produce a semi-final version towards the end of this year along with further supporting “Understanding WCAG 2.1” (like Understanding WCAG 2.0) material.

There is no guarantee that a proposed success criterion appearing in this draft will make it to the final guidelines. Public feedback is really important to us—and based on this feedback the proposed success criteria could be iterated further. We want to hear from all users, authors, tool developers and policy makers about any benefits arising from the new proposed success criteria as well as how achievable you feel it is to conform to their requirements. The AG WG is working hard to ensure backwards compatibility between WCAG 2.1 and WCAG 2.0. However, the full extent and manner of how WCAG 2.1 will build on WCAG 2.0 is still being worked out.

The working group’s intention is for the new proposed success criteria to provide good additional coverage for users with cognitive or learning disabilities, low vision requirements, and users of mobile devices with small screens and touch interfaces. Mapping the delta between these diverse user requirements is rewarding and challenging and this WCAG 2.1 draft has been made possible by the diverse skills and experience brought to bear on this task by the AG WG members.

The AG WG also has an Accessibility Conformance Testing (ACT) Task Force that aims to develop a framework and repository of test rules, to promote a unified interpretation of WCAG among different web accessibility test tools; as well as a 3.0 guidelines project called ‘Silver’ that forecasts more significant changes following a research-focused, user-centered design methodology.

So while WCAG 2.1 is technically a “dot”-release, it is substantial in its reach yet also deliberately constrained to effectively build on the existing WCAG 2.0 framework and practically address issues for users today.


On EME in HTML5

Published 27 Feb 2017 by Tim Berners-Lee in W3C Blog.

The question which has been debated around the net is whether W3C should endorse the Encrypted Media Extensions (EME) standard which allows a web page to include encrypted content, by connecting an existing underlying Digital Rights Management (DRM) system in the underlying platform. Some people have protested “no”, but in fact I decided the actual logical answer is “yes”. As many people have been so fervent in their demonstrations, I feel I owe it to them to explain the logic. My hope is, as there are many things which need to be protested and investigated and followed up in this world, that the energy which has been expended on protesting EME can be re-channeled into other things which really need it. Of the things they have argued along the way there have also been many things I have agreed with. And to understand the disagreement we need to focus on the actual question, whether W3C should recommend EME.

The reason for recommending EME is that by doing so, we lead the industry who developed it in the first place to form a simple, easy to use way of putting encrypted content online, so that there will be interoperability between browsers. This makes it easier for web developers and also for users. People like to watch Netflix (to pick one example). People spend a lot of time on the web, they like to be able to embed Netflix content in their own web pages, they like to be able to link to it. They like to be able to have discussions where they express what they think about the content where their comments and the content can all be linked to.

Could they put the content on the web without DRM? Well, yes; indeed a huge amount of video content is on the web without DRM. It is only for the big expensive movies that putting content on the web unencrypted makes it too easy for people to copy it, and in reality the utopian world of people voluntarily paying full price for content does not work. (Others argue that the whole copyright system should be dismantled, and they can do that in the legislatures and campaign to change the treaties, which will be a long struggle, and meanwhile we do have copyright).

Given DRM is a thing,…

When a company decides to distribute content they want to protect, they have many choices. This is important to remember.

If W3C did not recommend EME then the browser vendors would just make it outside W3C. If EME did not exist, vendors could just create new JavaScript-based versions. And without using the web at all, it is so easy to invite one's viewers to switch to viewing the content in a proprietary app. And if the closed platforms prohibited DRM in apps, then the large content providers would simply distribute their own set-top boxes and game consoles as the only way to watch their stuff.

If the Director Of The Consortium made a Decree that there would be No More DRM in fact nothing would change. Because W3C does not have any power to forbid anything. W3C is not the US Congress, or WIPO, or a court. It would perhaps have shortened the debate. But we would have been distracted from important things which need thought and action on other issues.

Well, could W3C take a stand and, just because DRM is a bad thing for users, simply refuse to work on DRM and push back on it wherever it could? Well, that would again not have any effect, because the W3C is not a court or an enforcement agency. W3C is a place for people to talk, and forge consensus over great new technology for the web. Yes, there is an argument made that in any case, W3C should just stand up against DRM, but we, like Canute, understand our power is limited.

But importantly, there are reasons why pushing people away from web is a bad idea: It is better for users for the DRM to be done through EME than other ways.

  1. When the content is in a web page, it is part of the web.
  2. The EME system can ‘sandbox’ the DRM code to limit the damage it can do to the user’s system
  3. The EME system can ‘sandbox’ the DRM code to limit the damage it can do to the user’s privacy.

As mentioned above, when a provider distributes a movie, they have a lot of options. They have different advantages and disadvantages. An important issue here is how much the publisher gets to learn about the user.

So in summary, it is important to support EME as providing a relatively safe online environment in which to watch a movie, as well as the most convenient, and one which makes it a part of the interconnected discourse of humanity.

I should mention that the extent to which the sandboxing of the DRM code protects the user is not defined by the EME spec at all, although current implementations in at least Firefox and Chrome do sandbox the DRM.

Spread to other media

Do we worry that, having put movies on the web, content providers will then want to use the same approach for other media such as music and books? For music, I don’t think so, because we have seen the industry move consciously from a DRM-based model to an unencrypted model, where often the buyer’s email address may be put in a watermark, but there is no DRM.

For books, yes this could be a problem, because there have been a large number of closed non-web devices which people are used to, and for which the publishers are used to using DRM. For many the physical devices have been replaced by apps, including DRM, on general purpose devices like closed phones or open computers. We can hope that the industry, in moving to a web model, will also give up DRM, but it isn’t clear.

We have talked about the advantages of different ways of using DRM in distributing movies. Now let us discuss some of the problems with DRM systems in general.

Problems with DRM

Much of this blog post is W3C’s technical perspective on EME, which I provide wearing my Director’s hat – but in the following discussion of DRM and the DMCA (since this is a policy issue), I am expressing my personal opinions.

Problems for users

There are many issues with DRM, from the user’s point of view. These have been much documented elsewhere. Here let me list these:

DRM systems are generally frustrating for users. Some of this can be compounded by things like attempts to region-code a licence so the user can only access the content when they are in a particular country, confusion between “buying” and “renting” something for a fixed term, and issues when content suppliers cease to exist and all “bought” things become inaccessible.

Despite these issues, users continue to buy DRM-protected content.

Problems for developers

DRM prevents independent developers from building different playback systems that interact with the video stream, for example, to add accessibility features, such as speeding up or slowing down playback.

Problems for Posterity

There is a possibility that we end up, decades from now, with no usable record of these movies, either because they are still encrypted, or because people didn’t bother taking copies of them at the time because the copies would have been useless to them. One of my favorite suggestions is that anyone copyrighting a movie and distributing it encrypted in any way MUST deposit an unencrypted copy with a set of copyright libraries which would include the British Library, the Library of Congress, and the Internet Archive.

Problems with Laws

Much of the push back against EME has been based on push back against DRM which has been based on specific important problems with certain laws.

The law most discussed is the US Digital Millennium Copyright Act (DMCA). Other laws exist in other countries which to a greater or lesser extent resemble the DMCA. Some of these have been brought up in the discussions, but we do not have an exhaustive list or analysis of them. It is worth noting that the US has spent a lot of energy using the various bilateral and multilateral agreements to persuade other countries to adopt laws like the DMCA. I do not go into the laws in other countries here. I do point out, though, that this cannot be dismissed as a USA-only problem. That said, let us go into the DMCA in more detail.

Whatever else you would like to change about the Copyright system as a whole, there are particular parts of the DMCA, specifically section 1201, which put innocent security researchers at risk of dire punishment if they are deemed to have thrown light on any DRM system.

There was an attempt at one point in the W3C process to refuse to bring the EME spec forward until all the working group participants would agree to indemnify security researchers under this section. To cut a very long story short, the attempt failed, and historians may point to the lack of leverage the EME spec had to be used in this way, and the difference between the set of companies in the working group and the set of companies which would be likely to sue over the DMCA, among other reasons.

Security researchers

There is currently (2017-02) a related effort at W3C to encourage companies to set up “bug bounty” programs to the extent that at least they guarantee immunity from prosecution to security researchers who find and report bugs in their systems. While W3C can encourage this, it can only provide guidelines, and cannot change the law. I encourage those who think this is important to help find a common set of best practice guidelines which companies will agree to. A first draft of some guidelines was announced. Please help make them effective and acceptable and get your company to adopt them.

Obviously a more logical thing would be to change the law, but the technical community seems to have become resigned to not being able to have a positive effect on the US legislative system, due to well-documented problems with that system.

This is something where public pressure could perhaps be beneficial, on the companies to agree on and adopt protection, not to mention changing the root cause in the DMCA. W3C would like to hear, by the way, of any examples of security researchers having this sort of problem, so that we can all follow this.

The future web

The web has to be universal, to function at all. It has to be capable of holding crazy ideas of the moment, but also the well polished ideas of the century. It must be able to handle any language and culture. It must be able to include information of all types, and media of many genres. Included in that universality is that it must be able to support free stuff and for-pay stuff, as they are all part of this world. This means that it is good for the web to be able to include movies, and so for that, it is better for HTML5 to have EME than to not have it.

TimBL


v2.2.0

Published 26 Feb 2017 by fabpot in Tags from Twig.


v1.32.0

Published 26 Feb 2017 by fabpot in Tags from Twig.


Week #6: Barracuda win streak is great news for the Sharks

Published 24 Feb 2017 by legoktm in The Lego Mirror.

The San Jose Barracuda, the Sharks AHL affiliate team, is currently riding a 13 game winning streak, and is on top of the AHL — and that's great news for the Sharks.

Ever since the Barracuda moved here from Worcester, Mass., it's only been great news for the Sharks. Because they play in the same stadium, sending players up or down becomes as simple as a little paperwork and asking them to switch locker rooms, not cross-country flights.

This allows the Sharks to have a significantly deeper roster, since they can call up new players at a moment's notice. So the Barracuda's win streak is great news for Sharks fans, since it demonstrates how even the minor league players are ready to play in the pros.

And if you're watching hockey, be on the watch for Joe Thornton to record his 1,000th assist! (More on that next week.)


How can I keep mediawiki not-yet-created pages from cluttering my google webmaster console with 404s?

Published 24 Feb 2017 by Sean in Newest questions tagged mediawiki - Webmasters Stack Exchange.

We have a MediaWiki install as part of our site. As on all wikis, people add links to not-yet-created pages (red links). When followed, these links return a 404 status (as there is no content) along with an invitation to add content.

I'm now getting buried in 404 notices in Google webmaster console for this site. Is there a best way to handle this?

Thanks for any help.


Cloudflare & FastMail: Your info is safe

Published 24 Feb 2017 by Helen Horstmann-Allen in FastMail Blog.

This week, Cloudflare disclosed a major security breach, affecting hundreds of thousands of services’ customer security. While FastMail uses Cloudflare, your information is safe, and it is not necessary to change your password.

The Cloudflare security breach affects services using Cloudflare to serve website information. When you go to our website (or read your email, or send your password), you are always connecting directly to a FastMail server. We use Cloudflare to serve domain name information only, which does not contain any sensitive or personal customer data.

However, while we do not advocate password reuse, we accept it happens. If your FastMail password is the same as any other web service you use, please change them both immediately (also, use a password manager, and enable two-step verification)! For more information about passwords and security, check out Lock Up Your Passwords and our password and security blog series, starting here.

For more information on the Cloudflare security breach, please check out their blog. Why does FastMail use Cloudflare? DDOSes that target our DNS can be mitigated with Cloudflare's capacity. If you have any other questions for us, please contact support.

This post has been amended to add remediation instructions in the third paragraph for users who may have a reused password.


The Other Half

Published 24 Feb 2017 by Jason Scott in ASCII by Jason Scott.

On January 19th of this year, I set off to California to participate in a hastily-arranged appearance in a UCLA building to talk about saving climate data in the face of possible administrative switchover. I wore a fun hat, stayed in a nice hotel, and saw an old friend from my MUD days for dinner. The appearance was a lot of smart people doing good work and wanting to continue with it.

While there, I was told my father’s heart surgery, which had some complications, was going to require an extended stay and we were running out of relatives and companions to accompany him. I booked a flight for seven hours after I’d arrive back in New York to go to North Carolina and stay with him. My father has means, so I stayed in a good nearby hotel room. I stayed with him for two and a half weeks, booking ten to sixteen hour days to accompany him through a maze of annoyances, indignities, smart doctors, variant nurses ranging from saints to morons, and generally ensure his continuance.

In the middle of this, I had a non-movable requirement to move the manuals out of Maryland and send them to California. Looking through several possibilities, I settled with: Drive five hours to Maryland from North Carolina, do the work across three days, and drive back to North Carolina. The work in Maryland had a number of people helping me, and involved pallet jacks, forklifts, trucks, and crazy amounts of energy drinks. We got almost all of it, with a third batch ready to go. I drove back the five hours to North Carolina and caught up on all my podcasts.

I stayed with my father another week and change, during which I dented my rental car, and hit another hard limit: I was going to fly to Australia. I also, to my utter horror, realized I was coming down with some sort of cold/flu. I did what I could – stabilized my father’s arrangements, went into the hotel room, put on my favorite comedians in a playlist, turned out the lights, drank 4,000mg of Vitamin C, banged down some orange juice, drank Mucinex, and covered myself in 5 blankets. I woke up 15 hours later in a pool of sweat and feeling like I’d crossed the boundary with that disease. I went back to the hospital to assure my dad was OK (he was), and then prepped for getting back to NY, where I discovered almost every flight for the day was booked due to so many cancelled flights the previous day.

After lots of hand-wringing, I was able to book a very late flight from North Carolina to New York, and stayed there for 5 hours before taking a 25 hour two-segment flight through Dubai to Melbourne.

I landed in Melbourne on Monday the 13th of February, happy that my father was stable back in the US, and prepping for my speech and my other commitments in the area.

On Tuesday I had a heart attack.

We know it happened then, or began to happen, because of the symptoms I started to show – shortness of breath, a feeling of fatigue and an edge of pain that covered my upper body like a jacket. I was fucking annoyed – I felt like I was just super tired and needed some energy, and energy drinks and caffeine weren’t doing the trick.

I met with my hosts for the event I’d do that Saturday, and continued working on my speech.

I attended the conference for that week, did a couple interviews, saw some friends, took some nice tours of preservation departments and discussed copyright with very smart lawyers from the US and Australia.

My heart attack continued, blocking off what turned out to be a quarter of my bloodflow to my heart.

This was annoying me, but I didn’t know what it was, so, according to my fitbit, I walked 25 miles, walked up 100 flights of stairs, and maintained hours of exercise across the week trying to snap out of it.

I did a keynote for the conference. The next day I hosted a wonderful event for seven hours. I asked for a stool because I said I was having trouble standing comfortably. They gave me one. I took rests during it, just so the DJ could get some good time with the crowds. I was praised for keeping the crowd jumping and giving the event great energy. I had now been having a heart attack for four days.

That Sunday, I walked around Geelong, a lovely city near Melbourne, and ate an exquisite meal at Igni, a restaurant whose menu basically has one line to tell you you’ll be eating what they think you should have. Their choices were excellent. Multiple times during the meal, I dozed a little, as I was fatigued. When we got to the tram station, I walked back to the apartment to get some rest. Along the way, I fell to the sidewalk and got up after resting.

I slept off more of the growing fatigue and pain.

The next day I had the second exquisite meal of the trip at Vue Le Monde, a meal that lasted from about 8pm to midnight. My partner Rachel loves good meals and this is one of the finest you can have in the city, and I enjoyed it immensely. It would have been a fine last meal. I had now been experiencing a heart attack for about a week.

That night, I had a lot of trouble sleeping. The pain was now a complete jacket of annoyance on my body, and there was no way to rest that didn’t feel awful. I decided medical attention was needed.

The next morning, Rachel and I walked 5 blocks to a clinic, found it was closed, and walked further to the RealCare Health Clinic. I was finding it very hard to walk at this point. Dr. Edward Petrov saw me, gave me some therapy for reflux, found it wasn’t reflux, and got concerned, especially as having my heart checked might cost me something significant. He said he had a cardiologist friend who might help, and he called him, and it was agreed we could come right over.

We took a taxi over to Dr. Georg Leitl’s office. He saw me almost immediately.

He was one of those doctors that only needed to take my blood pressure and check my heart with a stethoscope for 30 seconds before looking at me sadly. We went to his office, and he told me I could not possibly get on the plane I was leaving on in 48 hours. He also said I needed to go to Hospital very quickly, and that I had some things wrong with me that needed attention.

He had his assistants measure my heart and take an ultrasound, wrote something on a notepad, put all the papers in an envelope with the words “SONNY PALMER” on them, and drove me personally over in his car to St. Vincent’s Hospital.

Taking me up to the cardiology department, he put me in the waiting room of the surgery, talked to the front desk, and left. I waited 5 anxious minutes, and then was brought into a room with two doctors, one of whom turned out to be Dr. Sonny Palmer.

Sonny said Georg thought I needed some help, and I’d be checked within a day. I asked if he’d seen the letter with his name on it. He hadn’t. He went and got it.

He came back and said I was going to be operated on in an hour.

He also explained I had a rather blocked artery in need of surgery. Survival rate was very high. Nerve damage from the operation was very unlikely. I did not enjoy phrases like survival and nerve damage, and I realized what might happen very shortly, and what might have happened for the last week.

I went back to the waiting room, where I tweeted what might have been my possible last tweets, left a message for my boss Alexis on the slack channel, hugged Rachel tearfully, and then went into surgery, or potential oblivion.

Obviously, I did not die. The surgery was done with me awake, and involved making a small hole in my right wrist, where Sonny (while blasting Bon Jovi) went in with a catheter, found the blocked artery, installed a 30mm stent, and gave back the blood to the quarter of my heart that was choked off. I listened to instructions on when to talk or when to hold myself still, and I got to watch my beating heart on a very large monitor as it got back its function.

I felt (and feel) legions better, of course – surgery like this rapidly improves life. Fatigue is gone, pain is gone. It was also explained to me what to call this whole event: a major heart attack. I damaged the heart muscle a little, although that bastard was already strong from years of high blood pressure and I’m very young comparatively, so the chances of recovery to the point of maybe even being healthier than before are pretty good. The hospital, St. Vincent’s, was wonderful – staff, environment, and even the food (including curry and afternoon tea) were a delight. My questions were answered, my needs met, and everyone felt like they wanted to be there.

It’s now been 4 days. I was checked out of the hospital yesterday. My stay in Melbourne was extended two weeks, and my hosts (MuseumNext and ACMI) paid for basically all of the additional Airbnb that I’m staying at. I am not cleared to fly until the two weeks are up, and I am now taking six medications. They make my blood thin, lower my blood pressure, cure my kidney stones/gout, and stabilize my heart. I am primarily resting.

I had lost a lot of weight and I was exercising, but my cholesterol was a lot worse than anyone really figured out. The drugs and lifestyle changes will probably help knock that back, and I’m likely to adhere to them, unlike a lot of people, because I’d already been on a whole “life reboot” kick. The path that follows is, in other words, both pretty clear and going to be taken.

Had I died this week, at the age of 46, I would have left behind a very bright, very distinct and rather varied life story. I’ve been a bunch of things, some positive and negative, and projects I’d started would have lived quite neatly beyond my own timeline. I’d have also left some unfinished business here and there, not to mention a lot of sad folks and some extremely quality-variant eulogies. Thanks to a quirk of the Internet Archive, there’s a little statue of me – maybe it would have gotten some floppy disks piled at its feet.

Regardless, I personally would have been fine on the accomplishment/legacy scale, if not on the first-person/relationships/plans scale. That my Wikipedia entry is going to have a different date on it than February 2017 is both a welcome thing and a moment to reflect.

I now face the Other Half, whatever events and accomplishments and conversations I get to engage in from this moment forward, and that could be anything from a day to 100 years.

Whatever and whenever that will be, the tweet I furiously typed out on my cellphone as a desperate last-moment possible-goodbye after nearly a half-century of existence will likely still apply:

“I have had a very fun time. It was enormously enjoyable, I loved it all, and was glad I got to see it.”

 


Three takeaways to understand Cloudflare's apocalyptic-proportions mess

Published 24 Feb 2017 by Carlos Fenollosa in Carlos Fenollosa — Blog.

It turns out that Cloudflare's proxies have been dumping uninitialized memory that contains plain HTTPS content for an indeterminate amount of time. If you're not familiar with the topic, let me summarize it: this is the worst crypto news in the last 10 years.

As usual, I suggest you read the HN comments to understand the scandalous magnitude of the bug.

If you don't see this as a news-opening piece on TV, it only confirms that journalists know nothing about tech.

How bad is it, really? Let's see

I'm finding private messages from major dating sites, full messages from a well-known chat service, online password manager data, frames from adult video sites, hotel bookings. We're talking full HTTPS requests, client IP addresses, full responses, cookies, passwords, keys, data, everything

If the bad guys didn't find the bug before Tavis, you may be in the clear. However, as usual in crypto, you must assume that any data you submitted through a Cloudflare HTTPS proxy has been compromised.

Three takeaways

A first takeaway: crypto may be mathematically perfect, but humans err and the implementations are not. Just because something is using strong crypto doesn't mean it's immune to bugs.

A second takeaway: MITMing the entire Internet doesn't sound so compelling when you put it that way. Sorry to be that guy, but this only confirms that the centralization of the Internet by big companies is a bad idea.

A third takeaway: change all your passwords. Yep. It's really that bad. Your passwords and private requests may be stored somewhere, on a proxy or on a malicious actor's servers.

Well, at least change your banking ones, important services like email, and master passwords on password managers -- you're using one, right? RIGHT?

You can't get back any personal info that got leaked but at least you can try to minimize the aftershock.

Update: here is a provisional list of affected services. Download the full list, export your password manager data into a csv file, and compare both files by using grep -f sorted_unique_cf.txt your_passwords.csv.

Afterwards, check the list of potentially affected iOS apps.

Let me conclude by saying that unless you were the victim of a targeted attack it's improbable that this bug is going to affect you at all. However, that small probability is still there. Your private information may be cached somewhere or stored on a hacker's server, waiting to be organized and leaked with a flashy slogan.

I'm really sorry about the overly dramatic post, but this time it's for real.

Tags: security, internet, news



Kubuntu 17.04 Beta 1 released for testers

Published 23 Feb 2017 by valorie-zimmerman in Kubuntu.

Today the Kubuntu team is happy to announce that Kubuntu Zesty Zapus (17.04) Beta 1 has been released. With this Beta 1 pre-release, you can see and test what we are preparing for 17.04, which we will be releasing in April.

NOTE: This is Beta 1 Release. Kubuntu Beta Releases are NOT recommended for:

* Regular users who are not aware of pre-release issues
* Anyone who needs a stable system
* Anyone uncomfortable running a possibly frequently broken system
* Anyone in a production environment with data or work-flows that need to be reliable

Getting Kubuntu 17.04 Beta 1:
* Upgrade from 16.10: run `do-release-upgrade -d` from a command line.
* Download a bootable image (ISO) and put it onto a DVD or USB Drive : http://cdimage.ubuntu.com/kubuntu/releases/zesty/beta-1/

Release notes: https://wiki.ubuntu.com/ZestyZapus/Beta1/Kubuntu


DigitalOcean, Your Data, and the Cloudflare Vulnerability

Published 23 Feb 2017 by DigitalOcean in DigitalOcean Blog.

Over the course of the last several hours, we have received a number of inquiries about the Cloudflare vulnerability reported on February 23, 2017. Since the information was released, we have been told by Cloudflare that none of our customer data has appeared in search caches. The DigitalOcean security team has done its own research into the issue, and we have not found any customer data present in the breach.

Out of an abundance of caution, DigitalOcean's engineering teams have reset all session tokens for our users, which will require that you log in again.

We recommend that you do the following to further protect your account:

Again, we would like to reiterate that there is no evidence that any customer data has been exposed as a result of this vulnerability, but we care about your security. We are therefore taking this precaution, as well as continuing to monitor the situation.


Nick Vigier, Director of Security


The localhost page isn’t working on MediaWiki

Published 23 Feb 2017 by hasanghaforian in Newest questions tagged mediawiki - Webmasters Stack Exchange.

I want to use Widget PDF to embed PDF files on my MediaWiki pages. So at first I installed Extension:Widgets on MediaWiki, and it seems it is installed (I can see it in the Installed extensions list in Special:Version of the wiki). Then I copied and pasted the entire source of the PDF widget code page into a page called Widget:PDF on my wiki:

<noinclude>__NOTOC__
<big>This widget allows you to '''embed PDF files''' on your wiki page.</big>

Created by [https://wiki.karlsregion.net/z/User:Wilhelm_Bühler Wilhelm Bühler] and adapted by [https://www.wikihoster.net Karsten Hoffmeyer].

== Using this widget ==
For information on how to use this widget, see [https://www.mediawikiwidgets.org/PDF widget description page on MediaWikiWidgets.org].

== Copy to your site ==
To use this widget on your site, just install [https://www.mediawiki.org/wiki/Extension:Widgets MediaWiki Widgets extension] and copy the [{{fullurl:{{FULLPAGENAME}}|action=edit}} full source code] of this page to your wiki as page '''{{FULLPAGENAME}}'''.
</noinclude><includeonly><object class="pdf-widget" data="<!--{$url|validate:url}-->" type="application/pdf" wmode="transparent" style="z-index: 999; height: 100%; min-height: <!--{$height|escape:'html'|default:680}-->px; width: 100%; max-width: <!--{$width|escape:'html'|default:960}-->px;"><param name="wmode" value="transparent">
<p>Currently your browser does not use a PDF plugin. You may however <a href="<!--{$url|validate:url}-->">download the PDF file</a> instead.</p></object></includeonly>

My PDF file is under this URL:

http://localhost/<wiki-name>/index.php/File:GraphicsandAnimations-Devoxx2010.pdf

And its name is File:GraphicsandAnimations-Devoxx2010.pdf. So as described here, I added this code to my wiki page:

{{#widget:PDF
 |url=http://localhost/<wiki-name>/index.php/File:GraphicsandAnimations-Devoxx2010.pdf
 |width=750
 |height=1050
}}

But this error occurred:

The localhost page isn’t working
localhost is currently unable to handle this request. 
HTTP ERROR 500

What I did:

  1. I also tried this (the original example from Widget PDF):

    {{#widget:PDF
     |url=https://www.semantic-mediawiki.org/w/images/e/e9/SMW_quick_reference.pdf
     |width=750
     |height=1050
    }}
    

    But the result was the same.

  2. I read Extension talk:Widgets but did not find anything.

  3. I opened Chrome DevTools (Ctrl+Shift+I), but there was no error.

How can I solve the problem?

Edit:

After some time, I tried uninstalling Widget PDF and Extension:Widgets and reinstalling them. So I removed the Extension:Widgets files/folder from $IP/extensions/ and also deleted the Widget:PDF page from the wiki. Then I installed Extension:Widgets again, but now I cannot open the wiki pages at all (I see the above error again), unless I delete require_once "$IP/extensions/Widgets/Widgets.php"; from LocalSettings.php. So I cannot even try to load Extension:Widgets.

Now I see this error in DevTools:

Failed to load resource: the server responded with a status of 500 (Internal Server Error)

Also, after uninstalling Extension:Widgets, I tried Extension:PDFEmbed and unfortunately I saw the above error again.
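
For what it's worth, a bare HTTP 500 with nothing useful in the browser console usually means the real detail is only in the server-side logs. A common first step (a sketch, not something from the question itself) is to temporarily switch on MediaWiki's error reporting in LocalSettings.php; the log path below is only an example:

# Temporary debugging settings for LocalSettings.php (remove once diagnosed)
error_reporting( E_ALL );
ini_set( 'display_errors', 1 );

$wgShowExceptionDetails = true;          # show the exception message instead of a bare 500
$wgShowDBErrorBacktrace = true;          # include a backtrace for database errors
$wgDebugLogFile = "/tmp/mw-debug.log";   # verbose debug log; path is an example

With those in place, reloading the failing page should replace the generic "localhost is currently unable to handle this request" message with the actual PHP error, which usually points at the culprit (a missing library bundled with the extension, a PHP version mismatch, or something else entirely).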


Updates to DigitalOcean Two-factor Authentication

Published 22 Feb 2017 by DigitalOcean in DigitalOcean Blog.

Today we'd like to talk about security.

We know how challenging it can be to balance security and usability. The user experience around security features can often feel like an afterthought, but we believe that shouldn't be the case. Usability is just as important when it comes to security as any other part of your product because added friction can lead users to make less-secure choices. Today, we want to share with you some updates we rolled out this week to our two-factor login features to make them easier to use.

Our previous version required both SMS and an authenticator app to enable two-factor authentication. While SMS can work in a crunch, it's no longer as secure as it once was, delivery for our international customers wasn't always reliable, and tying both methods for authentication to the same mobile device definitely wasn't a great experience for anyone whose phone was unavailable.

Our new two-factor authentication features allow developers to choose between an authenticator app or SMS as a primary method, and between downloadable codes, authenticator app, or SMS as backup methods. This way SMS stays an option, but isn't a necessary part of securing access to your DigitalOcean account.

Add backup methods

To take a look at the changes and enable it on your account, simply navigate to Settings and click the link in your profile to "Enable two-factor authentication."

Enable two-factor authentication

Making two-factor authentication a little easier and more broadly available is just a first step. We believe securing access to your infrastructure should be as simple as it is to spin up a few Droplets and a Load Balancer.

Do you have any suggestions for how we can help make security easier? We want to hear from you. We're already considering features like YubiKey support. What else would you like to see? Please reach out to us on our UserVoice or let us know in the comments below.


Nick Vigier - Director of Security
Josh Viney - Product Manager, Customer Experience


Editing MediaWiki pages in an external editor

Published 21 Feb 2017 by Sam Wilson in Sam's notebook.

I’ve been working on a MediaWiki gadget lately, for editing Wikisource authors’ metadata without leaving the author page. It’s fun working with and learning more about OOjs-UI, but it’s also a pain because gadget code is kept in Javascript pages in the MediaWiki namespace, and so every single time you want to change something it’s a matter of saving the whole page, then clicking ‘edit’ again, and scrolling back down to find the spot you were at. The other end of things—the re-loading of whatever test page is running the gadget—is annoying and slow enough, without having to do much the same thing at the source end too.

So I’ve added a feature to the ExternalArticles extension that allows a whole directory full of text files to be imported at once (namespaces are handled as subdirectories). More importantly, it also ‘watches’ the directories and every time a file is updated (i.e. with Ctrl-S in a text editor or IDE) it is re-imported. So this means I can have MediaWiki:Gadget-Author.js and MediaWiki:Gadget-Author.css open in PhpStorm, and just edit from there. I even have these files open inside a MediaWiki project and so autocompletion and documentation look-up works as usual for all the library code. It’s even quite a speedy set-up, luckily: I haven’t yet noticed having to wait at any time between saving some code, alt-tabbing to the browser, and hitting F5.
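
The general shape of that import-on-save loop is easy to sketch even without the extension. The following is a rough, stand-alone approximation (not the ExternalArticles code itself) that polls a directory and pushes any changed file into the wiki with the standard maintenance/edit.php script; the directory layout and the file-name-equals-page-title convention are assumptions:

<?php
// Rough polling sketch: re-import changed files as wiki pages.
// Assumes it runs from the MediaWiki root directory and that files in
// $watchDir are named after their target pages, e.g. "MediaWiki:Gadget-Author.js".

$watchDir = __DIR__ . '/gadget-src';   // example path
$seen = [];

while ( true ) {
    clearstatcache();                  // don't let PHP cache stale file mtimes
    foreach ( glob( $watchDir . '/*' ) as $file ) {
        $mtime = filemtime( $file );
        if ( !isset( $seen[$file] ) || $seen[$file] < $mtime ) {
            $seen[$file] = $mtime;
            $title = basename( $file );    // file name doubles as page title here
            // maintenance/edit.php reads the new page text from stdin.
            shell_exec( sprintf(
                'php %s %s < %s',
                escapeshellarg( __DIR__ . '/maintenance/edit.php' ),
                escapeshellarg( $title ),
                escapeshellarg( $file )
            ) );
            echo "Re-imported $title\n";
        }
    }
    sleep( 1 );                        // crude poll; inotify would be nicer
}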

I dare say my bodged-together script has many flaws, but it’s working for me for now!


Mediawiki doesn't send any email

Published 19 Feb 2017 by fpiette in Newest questions tagged mediawiki - Ask Ubuntu.

My MediaWiki installation (1.28.0, PHP 7.0.13) doesn't send any email, and yet no error is emitted. I checked using the Special:EmailUser page.

What I have tried:

1) A simple PHP script that sends a mail using PHP's mail() function. It works.
2) I turned on the PHP mail log. There is a normal log line for each MediaWiki email "sent".

PHP is configured (correctly since it works) to send email using Linux SendMail. MediaWiki is not configured to use direct SMTP.

Any suggestion appreciated. Thanks.
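
For what it's worth, when PHP's mail() works from a test script but MediaWiki's messages silently vanish, one common workaround is to take the local sendmail out of the picture and point MediaWiki at an SMTP server directly. A minimal LocalSettings.php sketch, with the host and credentials as placeholders:

# Make sure email features are switched on
$wgEnableEmail      = true;
$wgEnableUserEmail  = true;               # needed for Special:EmailUser
$wgEmergencyContact = "wiki@example.org";
$wgPasswordSender   = "wiki@example.org";

# Deliver via SMTP instead of PHP mail()/sendmail (all values are placeholders)
$wgSMTP = [
    'host'     => 'smtp.example.org',
    'IDHost'   => 'example.org',
    'port'     => 587,
    'auth'     => true,
    'username' => 'wiki@example.org',
    'password' => 'secret',
];

If sticking with sendmail, it is also worth checking that the From address MediaWiki uses ($wgPasswordSender) is one the mail server will accept, since rejected senders can fail without any visible MediaWiki error.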


Week #5: Politics and the Super Bowl – chewing a pill too big to swallow

Published 17 Feb 2017 by legoktm in The Lego Mirror.

For a little change, I'd like to talk about the impact of sports upon us this week. The following opinion piece was first written for La Voz, and can also be read on their website.

Super Bowl commercials have become the latest victim of extreme politicization. Two commercials stood out from the rest by featuring pro-immigrant advertisements in the midst of a political climate deeply divided over immigration law. Specifically, Budweiser aired a mostly fictional story of their founder traveling to America to brew, while 84 Lumber’s ad followed a mother and daughter’s odyssey to America in search of a better life.

The widespread disdain toward non-white outsiders, which in turn has created massive backlash toward these advertisements, is no doubt repulsive, but caution should also be exercised when critiquing the placement of such politicization. Understanding the complexities of political institutions and society is no doubt essential, yet it is alarming that every facet of society has become so politicized; ironically, this desire to achieve an elevated political consciousness actually turns many off from the importance of politics.

Football — what was once simply a calming means of unwinding from the harsh winds of an oppressive world — has now become another headline news center for political drama.

Former President George H. W. Bush and his wife practically wheeled themselves out of a hospital to prepare for hosting the game. New England Patriots owner Robert Kraft and quarterback Tom Brady received sharp criticism for their support of Donald Trump, even to the point of losing thousands of dedicated fans.

Meanwhile, the NFL Players Association publicly opposed President Trump’s immigration ban three days before the game, with the NFLPA’s president saying “Our Muslim brothers in this league, we got their backs.”

Let’s not forget the veterans and active service members that are frequently honored before NFL games, except that’s an advertisement too – the Department of Defense paid NFL teams over $5 million over four years for those promotions.

Even though it’s America’s pastime, football and other similar mindless outlets serve the role of allowing us to escape whenever we need a break from reality, and for nearly three hours on Sunday, America got its break, except for those commercials. If we keep getting nagged about an issue, even if we’re generally supportive, it will eventually become incessant to the point of promoting nihilism.

When Meryl Streep spoke out at the Golden Globes, she turned a relaxing event of celebratory fawning into a political shitstorm which redirected all attention back toward Trump controversies. Even though she was mostly correct, the efficacy becomes questionable after such repetition, as many will become desensitized.

Politics are undoubtedly more important than ever now, but for our sanity’s sake, let’s keep it to a minimum in football. That means commercials too.


Kubuntu 16.04.2 LTS Update Available

Published 16 Feb 2017 by valorie-zimmerman in Kubuntu.

The second point release update to our LTS release 16.04 is out now. This contains all the bugfixes added to 16.04 since its first release in April. Users of 16.04 can run the normal update procedure to get these bugfixes. In addition, we suggest adding the Backports PPA to update to Plasma 5.8.5. Read more about it: http://kubuntu.org/news/plasma-5-8-5-bugfix-release-in-xenial-and-yakkety-backports-now/

Warning: 14.04 LTS to 16.04 LTS upgrades are problematic, and should not be attempted by the average user. Please install a fresh copy of 16.04.2 instead. To prevent messages about upgrading, change Prompt=lts to Prompt=normal or Prompt=never in the /etc/update-manager/release-upgrades file. As always, make a thorough backup of your data before upgrading.

See the Ubuntu 16.04.2 release announcement and Kubuntu Release Notes.

Download 16.04.2 images.


Load Balancers: Simplifying High Availability

Published 13 Feb 2017 by DigitalOcean in DigitalOcean Blog.

Over the past five years, we've seen our community grow by leaps and bounds, and we've grown right alongside it. More and more of our users are managing complex workloads that require more resilience and need to be highly available. Our Floating IPs already enable you to implement an architecture that eliminates single points of failure, but we knew we could do better by bringing our "DO-Simple" approach to the problem.

So today, we are releasing Load Balancers—a fully managed, highly available service that you can deploy as easily as a Droplet.

Our goal is to provide simple and intuitive tools that let your team launch, scale, and manage production applications of any size. With our Load Balancers, just choose a region and which Droplets will receive the traffic. We take care of the rest.

Load Balancers cost $20/month with no additional bandwidth charges and are available in all DigitalOcean regions.

Features

For more details, see this overview on our Community site.

Simplified Service Discovery

Your Load Balancer will distribute incoming traffic across your Droplets, allowing you to build more reliable and performant applications by creating redundancy. You can add target Droplets to a Load Balancer by either choosing specific Droplets, or choosing a tag used by a group of Droplets.

With tags, scaling your application horizontally becomes easy. Launch a new Droplet with the tag applied, and it will be automatically added to your Load Balancer's backend pool, ready to receive traffic. Remove the tag, and the Droplet will be removed from the backend pool.
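
The same tag-based pool can be set up when creating a Load Balancer through the DigitalOcean v2 API rather than the control panel. Below is a rough PHP/cURL sketch; the token is a placeholder, and the exact field names (notably tag and forwarding_rules) should be checked against the current API reference:

<?php
// Sketch: create a Load Balancer whose backend pool is every Droplet
// carrying the "web" tag. Token and field names are assumptions to verify.

$token = getenv( 'DO_API_TOKEN' );

$payload = [
    'name'   => 'example-lb',
    'region' => 'nyc3',
    'tag'    => 'web',                  // Droplets with this tag join the pool
    'forwarding_rules' => [ [
        'entry_protocol'  => 'http',
        'entry_port'      => 80,
        'target_protocol' => 'http',
        'target_port'     => 80,
    ] ],
];

$ch = curl_init( 'https://api.digitalocean.com/v2/load_balancers' );
curl_setopt_array( $ch, [
    CURLOPT_POST           => true,
    CURLOPT_POSTFIELDS     => json_encode( $payload ),
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_HTTPHEADER     => [
        'Content-Type: application/json',
        'Authorization: Bearer ' . $token,
    ],
] );

echo curl_exec( $ch );
curl_close( $ch );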

Control panel

Get started by following this step-by-step guide on our Community site.

Security & SSL Options

We didn't forget about security! Here's how Load Balancers measure up:

If you're configuring a Load Balancer instance to use SSL termination, keep in mind that any Droplet using Shared Private Networking connected to the Load Balancer will have traffic sent to its private IP. Otherwise, it will use the Droplet's public IP. (For full control and end-to-end encryption, choose the "SSL passthrough" option.)

Learn more about configuring either SSL termination or SSL passthrough with our Community tutorials.

Coming Soon

We already have many Load Balancer improvements planned. Some features you will see soon include:

Load Balancers are just the beginning. Our 2017 roadmap is focused on bringing the "DO-Simple" experience to more complex, production workloads. Your feedback will help us as we improve Load Balancers and roll out more features, including new storage, security, and networking capabilities. Let us know what you think in the comments!


Week #4: 500 for Mr. San Jose Shark

Published 9 Feb 2017 by legoktm in The Lego Mirror.

He did it: Patrick Marleau scored his 500th career goal. He truly is Mr. San Jose Shark.

I had the pleasure of attending the next home game on Saturday right after he reached the milestone in Vancouver, and nearly lost my voice cheering for Marleau. They mentioned his accomplishment once before the game and again during a break, and each time Marleau would only stand up and acknowledge the crowd cheering for him when he realized they would not stop until he did.

He's had his ups and downs, but he's truly a team player.

“I think when you hit a mark like this, you start thinking about everyone that’s helped you along the way,” Marleau said.

And on Saturday at home, Marleau assisted on both Sharks goals, helping out the teammates who had helped him score his goals over the past two weeks.

Congrats Marleau, and thanks for the 20 years of hockey. Can't wait to see you raise the Cup.


Simpson and his Donkey – an exhibition

Published 9 Feb 2017 by carinamm in State Library of Western Australia Blog.

Illustrations by Frané Lessac and words by Mark Greenwood share the heroic story of John Simpson Kirkpatrick in the picture book Simpson and his Donkey.  The exhibition is on display at the State Library until  27 April. 

Unpublished spread 14 for pages 32 – 33
Collection of draft materials for Simpson and his Donkey, PWC/254/18 

The original illustrations, preliminary sketches and draft materials displayed in this exhibition form part of the State Library’s Peter Williams’ collection: a collection of original Australian picture book art.

Known as ‘the man with the donkey’, Simpson was a medic who rescued wounded soldiers at Gallipoli during World War I.

The bravery and sacrifice attributed to Simpson is now considered part of the ‘Anzac legend’. It is the myth and legend of John Simpson that Frané Lessac and Mark Greenwood tell in their book.

Frané Lessac and Mark Greenwood also travelled to Anzac Cove to explore where Simpson and Duffy had worked.  This experience and their research enabled them to layer creative interpretation over historical information and Anzac legend.


On a moonless April morning, PWC254/6 

Frané Lessac is a Western Australian author-illustrator who has published over forty books for children. Frané speaks at festivals in Australia and overseas, sharing the process of writing and illustrating books. She often illustrates books by Mark Greenwood, of which Simpson and his Donkey is just one example.

Simpson and his Donkey is published by Walker Books, 2008. The original illustrations are on display in the Story Place Gallery until 27 April 2017.



Filed under: Children's Literature, community events, Exhibitions, Illustration, Picture Books, SLWA collections, SLWA displays, WA books and writers, WA history, Western Australia Tagged: children's literature, exhibitions, Frane Lessac, Mark Greenwood, Peter Williams collection, Simpson and his Donkey, State Library of Western Australia, The Story Place

Untitled

Published 7 Feb 2017 by Sam Wilson in Sam's notebook.

I’m heading to MediaWiki with Stevo.


Week #3: All-Stars

Published 2 Feb 2017 by legoktm in The Lego Mirror.

via /u/PAGinger on reddit

Last weekend was the NHL All-Star game and skills competition, with Brent Burns, Martin Jones, and Joe Pavelski representing the San Jose Sharks in Los Angeles. And to no one's surprise, they were all booed!

Pavelski scored a goal during the tournament for the Pacific Division, and Burns scored during the skills competition's "Four Line Challenge". But since they represented the Pacific, we have to talk about the impossible shot Mike Smith made.

And across the country, the 2017 NFL Pro Bowl (their all-star game) was happening at the same time. The Oakland Raiders had seven Pro Bowlers (tied for most from any team), and the San Francisco 49ers had...none.

In the meantime, the 49ers managed to hire a former safety with no General Manager-related experience as their new GM. It's really not clear what Jed York, the 49ers owner, is trying to do here, and why he would sign John Lynch to a six-year contract.

But really, how much worse could it get for the 49ers?


Plugin Guideline Change

Published 30 Jan 2017 by Ipstenu (Mika Epstein) in Make WordPress Plugins.

With the new directory on the horizon, which allows us to easily hard-limit the number of plugin tags displayed, we have taken the time to change the guidelines.

While minor updates to the guidelines (with regard to spelling, grammar, etc) are common, major changes are rare and we are striving to be more transparent about them. Hence this post 🙂

Guideline 12 (readme links) clarified to cover spam and tags.

The guideline now reads as follows:

12. Public facing pages on WordPress.org (readmes) may not spam.

Public facing pages, including readmes and translation files, may not be used to spam. Spammy behavior includes (but is not limited to) unnecessary affiliate links, tags to competitors plugins, use of over 12 tags total, blackhat SEO, and keyword stuffing.

Links to directly required products, such as themes or other plugins required for the plugin’s use, are permitted within moderation. Similarly, related products may be used in tags but not competitors. If a plugin is a WooCommerce extension, it may use the tag ‘woocommerce.’ However if the plugin is an alternative to Akismet, it may not use that term as a tag. Repetitive use of a tag or specific term is considered to be keyword stuffing, and is not permitted.

Write your readmes for people, not bots.

In all cases, affiliate links must be disclosed and must directly link to the affiliate service, not a redirect or cloaked URL.

The previous version had the title of “… may not contain “sponsored” or “affiliate” links or third party advertisements” which was too specific and yet not direct enough as to what the intent was. We sincerely mean “Do not use your readme to spam.” Tag abuse, keyword stuffing, and blackhat SEO practices are all spamming.

While we still ask you to use no more than 12 tags, once we move to the new directory, we will simply not display the overage. You should clean that up now. The code is such that there will not be a way to grant exceptions. This is by intent. You don’t need 30 tags, folks.

Guideline 13 (formerly number of tags) now references using included libraries

Since we no longer needed a separate guideline for tags, we have completely changed this guideline to address an issue of security.

13. The plugin should make use of WordPress’ default libraries.

WordPress includes a number of useful libraries, such as jQuery, Atom Lib, SimplePie, PHPMailer, PHPass, and more. For security and stability reasons, plugins may not include those libraries in their own code, but instead must use the versions of those libraries packaged with WordPress.

For a list of all javascript libraries included in WordPress, please review Default Scripts Included and Registered by WordPress.
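
In plugin code this usually just means declaring WordPress's bundled copy as a dependency rather than shipping your own. A minimal sketch (the 'myplugin-admin' handle and file path are illustrative):

<?php
/*
 * Instead of bundling js/jquery.js with the plugin, depend on the copy
 * that ships with WordPress by listing 'jquery' as a dependency.
 */
function myplugin_enqueue_scripts() {
    wp_enqueue_script(
        'myplugin-admin',                        // this plugin's own handle
        plugins_url( 'js/admin.js', __FILE__ ),  // this plugin's own script
        [ 'jquery' ],                            // core-provided jQuery
        '1.0.0',
        true                                     // load in the footer
    );
}
add_action( 'wp_enqueue_scripts', 'myplugin_enqueue_scripts' );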

This issue has become incredibly important when you consider that roughly 90 plugins had to be contacted and closed regarding the use of PHPMailer. They had included the entire library and not kept it updated. I’m aware that we use a forked version of that specific library and I have raised core trac ticket #39714 to address this issue.

While we do not (yet) have a public page to list all 3rd party libraries, I’ve raised meta trac ticket #2431 to hopefully get this sanely documented.

#guidelines


Updates to legoktm.com

Published 29 Jan 2017 by legoktm in The Lego Mirror.

Over the weekend I migrated legoktm.com and associated services over to a new server. It's powered by Debian Jessie instead of the slowly aging Ubuntu Trusty. Most services were migrated with no downtime by rsync'ing content over and then updating DNS. Only git.legoktm.com had some downtime due to needing to stop the service before copying over the database.

I did not migrate my IRC bouncer history or configuration, so I'm starting fresh. So if I'm no longer in a channel, feel free to PM me and I'll rejoin!

At the same time I moved the main https://legoktm.com/ homepage to MediaWiki. Hopefully that will encourage me to update the content on it more often.

Finally, the tor relay node I'm running was moved to a separate server entirely. I plan on increasing the resources allocated to it.


Kubuntu 17.04 Alpha 2 released for testers

Published 28 Jan 2017 by valorie-zimmerman in Kubuntu.

Today the Kubuntu team is happy to announce that Kubuntu Zesty Zapus (17.04) Alpha 2 has been released. With this Alpha 2 pre-release, you can see what we are trying out in preparation for 17.04, which we will be releasing in April.

NOTE: This is Alpha 2 Release. Kubuntu Alpha Releases are NOT recommended for:

* Regular users who are not aware of pre-release issues
* Anyone who needs a stable system
* Anyone uncomfortable running a possibly frequently broken system
* Anyone in a production environment with data or work-flows that need to be reliable

Getting Kubuntu 17.04 Alpha 2
* Upgrade from 16.10: run `do-release-upgrade` from a command line.
* Download a bootable image (ISO) and put it onto a DVD or USB Drive


Week #2: NATTY HATTY FOR PATTY

Published 26 Jan 2017 by legoktm in The Lego Mirror.

The only person who would dare upstage Patrick Marleau's four goal night is Randy Hahn, with his hilarious call after Marleau's third goal to finish a natural hat-trick: "NATTY HATTY FOR PATTY". And after scoring another, Marleau became the first player to score four goals in a single period since the great Mario Lemieux did in 1997. He's also the third Shark to score four goals in a game, joining Owen Nolan (no video available, but his hat-trick from the 1997 All-Star game is fabulous) and Tomáš Hertl.

Marleau is also ready to hit his next milestone of 500 career goals - he's at 498 right now. Every impressive stat he puts up just further solidifies him as one of the greatest hockey players of his generation. But he's still missing the one achievement that all the greats need - a Stanley Cup. The Sharks made their first trip to the Stanley Cup Finals last year, but realistically had very little chance of winning; they simply were not the better team.

The main question these days is how long Marleau and Joe Thornton will keep playing for, and if they can stay healthy until they eventually win that Stanley Cup.

Discuss this post on Reddit.


Bromptons in Museums and Art Galleries

Published 23 Jan 2017 by Andy Mabbett in Andy Mabbett, aka pigsonthewing.

Every time I visit London, with my Brompton bicycle of course, I try to find time to take in a museum or art gallery. Some are very accommodating and will cheerfully look after a folded Brompton in a cloakroom (e.g. Tate Modern, Science Museum) or, more informally, in an office or behind the security desk (Bank of England Museum, Petrie Museum, Geffrye Museum; thanks folks).


Brompton bicycle folded

When folded, Brompton bikes take up very little space

Others, without a cloakroom, have lockers for bags and coats, but these are too small for a Brompton (e.g. Imperial War Museum, Museum of London) or they simply refuse to accept one (V&A, British Museum).

A Brompton bike is not something you want to chain up in the street, and carrying a hefty bike-lock would defeat the purpose of the bike’s portability.


Jack Wills, New Street (geograph 4944811)

This Brompton bike hire unit, in Birmingham, can store ten folded bikes each side. The design could be repurposed for use at venues like museums or galleries.

I have an idea. Brompton could work with museums — in London, where Brompton bikes are ubiquitous, and elsewhere, though my Brompton and I have never been turned away from a museum outside London — to install lockers which can take a folded Brompton. These could be inside with the bag lockers (preferred) or outside, using the same units as their bike hire scheme (pictured above).

Where has your Brompton had a good, or bad, reception?

Update

Less than two hours after I posted this, Will Butler-Adams, MD of Brompton, replied to me on Twitter:

so now I’m reaching out to museums, in London to start with, to see who’s interested.

The post Bromptons in Museums and Art Galleries appeared first on Andy Mabbett, aka pigsonthewing.


Running with the Masai

Published 23 Jan 2017 by Tom Wilson in tom m wilson.

What are you going to do if you like tribal living and you’re in the cold winter of the Levant?  Head south to the Southern Hemisphere, and to the wilds of Africa. After leaving Israel and Jordan that is exactly what I did. I arrived in Nairobi and the first thing which struck me was […]

Week #1: Who to root for this weekend

Published 22 Jan 2017 by legoktm in The Lego Mirror.

For the next 10 weeks I'll be posting sports content related to Bay Area teams. I'm currently taking an intro to features writing class, and we're required to keep a blog that focuses on a specific topic. I enjoy sports a lot, so I'll be covering Bay Area sports teams (Sharks, Earthquakes, Raiders, 49ers, Warriors, etc.). I'll also be trialing using Reddit for comments. If it works well, I'll continue using it for the rest of my blog as well. And with that, here goes:

This week the Green Bay Packers will be facing the Atlanta Falcons in the very last NFL game at the Georgia Dome for the NFC Championship. A few hours later, the Pittsburgh Steelers will meet the New England Patriots in Foxboro competing for the AFC Championship - and this will be only the third playoff game in NFL history featuring two quarterbacks with multiple Super Bowl victories.

Neither Bay Area football team has a direct stake in this game, but Raiders and 49ers fans have a lot to root for this weekend.

49ers: If you're a 49ers fan, you want to root for the Falcons to lose. This might sound a little weird, but currently the 49ers are looking to hire Falcons offensive coordinator Kyle Shanahan as their new head coach. However, until the Falcons' season ends, they cannot officially hire him. And since the 49ers' general manager search depends upon having a head coach, they can get a two-week head start if the Falcons lose this weekend.

Raiders: Do you remember the Tuck Rule Game? If so, you'll still probably be rooting for anyone but Tom Brady, quarterback for the Patriots. If not, well, you'll probably want to root for the Steelers, who eliminated the Raiders' division rival Kansas City Chiefs last weekend in one of the most bizarre playoff games. Even though the Steelers could not score a single touchdown, they topped the Chiefs' two touchdowns with a record six field goals. Raiders fans who had to endure two losses to the Chiefs this season surely appreciated how the Steelers embarrassed the Chiefs on prime time television.

Discuss this post on Reddit.


Four Stars of Open Standards

Published 21 Jan 2017 by Andy Mabbett in Andy Mabbett, aka pigsonthewing.

I’m writing this at UKGovCamp, a wonderful unconference. This post constitutes notes, which I will flesh out and polish later.

I’m in a session on open standards in government, convened by my good friend Terence Eden, who is the Open Standards Lead at Government Digital Service, part of the United Kingdom government’s Cabinet Office.

Inspired by Tim Berners-Lee’s “Five Stars of Open Data“, I’ve drafted “Four Stars of Open Standards”.

These are:

  1. Publish your content consistently
  2. Publish your content using a shared standard
  3. Publish your content using an open standard
  4. Publish your content using the best open standard

Bonus points for:

Point one, if you like, is about having your own local standard — if you publish three related data sets, for instance, be consistent between them.

Point two could simply mean agreeing a common standard with other parts of your organisation, neighbouring local authorities, or suchlike.

In points three and four, I’ve taken “open” to be the term used in the “Open Definition“:

Open means anyone can freely access, use, modify, and share for any purpose (subject, at most, to requirements that preserve provenance and openness).

Further reading:

The post Four Stars of Open Standards appeared first on Andy Mabbett, aka pigsonthewing.


2017: What's Shipping Next on DigitalOcean

Published 17 Jan 2017 by DigitalOcean in DigitalOcean Blog.

The start of a new year is a great opportunity to reflect on the past twelve months. At the beginning of 2016, I began advising the team at DigitalOcean and I knew the company and the products were something special. I joined DigitalOcean as the CTO in June 2016 and our engineering team was scaling rapidly, teams were organizing around new product initiatives, and we were gearing up for the second product to be shipped in our company's history: Block Storage.

Going from one great product to two in 2016 was a major shift for DigitalOcean and the start of what's going to be an exciting year of new capabilities to support larger production workloads in 2017.

2016 achievements

The "DO-Simple" Way

In the coming year, we are not only strengthening the foundation of our platform to increase performance and enable our customers to scale, we are also broadening our product portfolio to offer services we know teams of developers need. However, we are not just bringing new products and features to market; we are ensuring that what we offer maintains the "DO-Simple" standard that our customers expect and appreciate.

What does DO-Simple mean? At DigitalOcean, we are committed to sticking to our mission to simplify infrastructure and create an experience that developers love. We are challenging the status quo and disrupting the way developers think about using the cloud. This is an exciting chapter for our company and something we believe sets us apart in the market. We want developers to focus on building their applications, not waste time and money on setting up, configuring, and monitoring. Writing great software is hard. The cloud that software runs on should be easy.

2017 Product Horizon

With distributed systems spread over thousands of servers in 12 datacenters across the world, we have valuable operational knowledge on managing infrastructure at scale. We believe our users can leverage the work we do in-house to manage their own infrastructure. Just this month, we released an open source agent that lets developers get a better picture of the health of their Droplets. We also added several new graphs to the Droplet graphs page and made the existing graphs much more precise. Having visibility into your infrastructure is only the first step, knowing when to act on that information is just as important. That's why later this quarter, we will be releasing additional monitoring capabilities and tools to better manage your Droplets in the DO-Simple way you expect. (Learn more about Monitoring on DigitalOcean.)

As we approach one million registered users and more than 40,000 teams of developers over the last 5 years, it is critical that we give our users the tools, scale and performance that are required to seamlessly launch, scale and manage any size production application. We have more and more customers managing complex workloads and large environments on DigitalOcean that would benefit from a Load Balancer. You can now request early access to Load Balancers on DigitalOcean here.

We aren't stopping at just adding load balancing to our offerings in 2017. We have a number of important capabilities we're working on to meet your high availability, data storage, security, and networking needs. Additionally, we will continue to iterate and invest in our Block Storage offering by making it available in more datacenter locations around the world.

Feedback Matters

We believe in building a customer-first organization that is committed to transparency. Therefore, I will continue to share more updates to our roadmap throughout the year. We have an iterative product development approach and engage our customers in many ways as part of the product prioritization and design process. The developer's voice matters at DigitalOcean. We don't assume that we have all the answers. Talking with and listening to the people who use our cloud day in and day out plays a major role in creating the simple and intuitive developer experience we strive to maintain. In the months to come, we will be engaging our customers through each product beta and general release.

Excited about what's coming? Have ideas about what we should do next? Share your thoughts with us in the comments below.

Happy coding,

Julia Austin, CTO


Supporting Software Freedom Conservancy

Published 17 Jan 2017 by legoktm in The Lego Mirror.

Software Freedom Conservancy is a pretty awesome non-profit that does some great stuff. They currently have a fundraising match going on, that was recently extended for another week. If you're able to, I think it's worthwhile to support their organization and mission. I just renewed my membership.

Become a Conservancy Supporter!


A Doodle in the Park

Published 16 Jan 2017 by Dave Robertson in Dave Robertson.

The awesome Carolyn White is doing a doodle a day, but in this case it was a doodle of Dave, with Tore and The Professor, out in the summer sun of the Manning Park Farmers and Artisan Market.



MediaWiki - powered by Debian

Published 16 Jan 2017 by legoktm in The Lego Mirror.

Barring any bugs, the last set of changes to the MediaWiki Debian package for the stretch release landed earlier this month. There are some documentation changes, and updates for changes to other, related packages. One of the other changes is the addition of a "powered by Debian" footer icon (drawn by the amazing Isarra), right next to the default "powered by MediaWiki" one.

Powered by Debian

This will only be added by default to new installs of the MediaWiki package. But existing users can just copy the following code snippet into their LocalSettings.php file (adjust paths as necessary):

# Add a "powered by Debian" footer icon
$wgFooterIcons['poweredby']['debian'] = [
    "src" => "/mediawiki/resources/assets/debian/poweredby_debian_1x.png",
    "url" => "https://www.debian.org/",
    "alt" => "Powered by Debian",
    "srcset" =>
        "/mediawiki/resources/assets/debian/poweredby_debian_1_5x.png 1.5x, " .
        "/mediawiki/resources/assets/debian/poweredby_debian_2x.png 2x",
];

The image files are included in the package itself, or you can grab them from the Git repository. The source SVG is available from Wikimedia Commons.


v2.1.0

Published 11 Jan 2017 by fabpot in Tags from Twig.


v1.31.0

Published 11 Jan 2017 by fabpot in Tags from Twig.


Importing pages breaks category feature

Published 10 Jan 2017 by Paul in Newest questions tagged mediawiki - Webmasters Stack Exchange.

I just installed MediaWiki 1.27.1 and setup completes without issue on a server with Ubuntu 16.04, nginx, PHP 5.6, and MariaDB 10.1.

I created an export file with a different wiki using the Special:Export page. I then imported the articles to the new wiki using the Special:Import page. The file size is smaller than any limits, and the time the operation takes to complete is much less than the configured timeouts.

Before import, I have created articles and categories and everything works as expected.

However, after importing, when I create a category tag on an article, clicking the link to the category's page doesn't show the article in the category.

I am using this markup within the article to create the category:

[[Category:Category Name]]

Is this a bug or am I missing something?
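
One thing worth ruling out (a sketch, not a confirmed fix): category membership lives in MediaWiki's link tables, which are updated by deferred jobs after an import, so if the job queue has not been processed the category pages can look empty. Forcing a link update on an imported page sometimes brings the entries back; the wiki URL and page title below are placeholders:

<?php
// Sketch: ask MediaWiki to rebuild the link tables (including category
// membership) for one imported page via the API's purge action.

$api   = 'http://localhost/wiki/api.php';     // placeholder URL
$title = 'Some Imported Article';             // placeholder title

$ch = curl_init( $api );
curl_setopt_array( $ch, [
    CURLOPT_POST           => true,
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_POSTFIELDS     => http_build_query( [
        'action'          => 'purge',
        'titles'          => $title,
        'forcelinkupdate' => 1,
        'format'          => 'json',
    ] ),
] );

echo curl_exec( $ch );
curl_close( $ch );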


Teacup – One Boy’s Story of Leaving His Homeland

Published 8 Jan 2017 by carinamm in State Library of Western Australia Blog.

slwa_b4638726_23

“Once there was a boy who had to leave home …and find another. In his bag he carried a book, a bottle and a blanket. In his teacup he held some earth from where he used to play”

A musical performance adapted from the picture book Teacup, written by Rebecca Young and illustrated by Matt Ottley, will premiere at the State Library of Western Australia as part of Fringe Festival.

Accompanied by musicians from Perth chamber music group Chimera Ensemble, Music Book’s Narrator Danielle Joynt and Lark Chamber Opera’s soprano composer Emma Jayakumar, the presentation of Teacup will be a truly ‘multi-modal’ performance, where the music of Matt Ottley will ‘paint’ the colours, scenery and words into life.

Performance Times:

Fri 27 January 2:30pm
Sat 28 January 10:30am, 1pm and 2:30pm
Sun 29 January 10:30am, 1pm and 2:30pm

Matt Ottley’s original paintings from the picture book Teacup form part of the State Library’s Peter Williams collection of original picture book art. The artworks will be displayed in Teacup – an exhibition in the ground floor gallery between 20 January and 24 March 2017.

Image credit: Cover illustration for Teacup, Matt Ottley, 2015. State Library of Western Australia, PWC/255/01  Reproduced in the book Teacup written by Rebecca Young with illustrations by Matt Ottley. Published by Scholastic, 2015.

This event is supported by the City of Perth 


Filed under: Children's Literature, community events, Concerts, Exhibitions, Illustration, Music, SLWA collections, SLWA displays, SLWA events, SLWA Exhibitions, Uncategorized Tagged: exhibitions, Matt Ottley, Music Book Stories Inc., Peter Williams collection, State Library of Western Australia, Teacup - One Boy's Story of Leaving His Homeland

Big Tribes

Published 5 Jan 2017 by Tom Wilson in tom m wilson.

In Jerusalem yesterday I encountered three of the most sacred sites of some of the biggest religions on earth. First the Western Wall, the most sacred site for Jews worldwide. Then after some serious security checks and long wait in a line we were allowed up a long wooden walkway, up to the Temple Mount.   […]

A Year Without a Byte

Published 4 Jan 2017 by Archie Russell in code.flickr.com.

One of the largest cost drivers in running a service like Flickr is storage. We’ve described multiple techniques to get this cost down over the years: use of COS, creating sizes dynamically on GPUs and perceptual compression. These projects have been very successful, but our storage cost is still significant.
At the beginning of 2016, we challenged ourselves to go further — to go a full year without needing new storage hardware. Using multiple techniques, we got there.

The Cost Story

A little back-of-the-envelope math shows storage costs are a real concern. On a very high-traffic day, Flickr users upload as many as twenty-five million photos. These photos require an average of 3.25 megabytes of storage each, totalling over 80 terabytes of data. Stored naively in a cloud service similar to S3, this day’s worth of data would cost over $30,000 per year, and continue to incur costs every year.

And a very large service will have over two hundred million active users. At a thousand images each, storage in a service similar to S3 would cost over $250 million per year (or $1.25 / user-year) plus network and other expenses. This compounds as new users sign up and existing users continue to take photos at an accelerating rate. Thankfully, our costs, and every large service’s costs, are different than storing naively at S3, but remain significant.
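
Those back-of-the-envelope figures are easy to re-derive. A quick sketch using the roughly $0.03 per gigabyte-month S3 price mentioned below (the post's slightly higher totals presumably fold in redundancy, requests, and pricing differences):

<?php
// Rough reproduction of the storage math above (decimal units throughout).
$pricePerGbMonth = 0.03;          // assumed S3-style price, USD
$bytesPerImage   = 3.25e6;        // 3.25 MB average

// One very high-traffic day: 25 million uploads.
$dayBytes = 25e6 * $bytesPerImage;
printf( "One day's uploads: %.1f TB\n", $dayBytes / 1e12 );            // ~81 TB
printf( "Stored naively: ~$%.0f per year\n",
    ( $dayBytes / 1e9 ) * $pricePerGbMonth * 12 );                     // ~$29,000

// A large service: 200 million users with 1,000 images each.
$serviceBytes = 200e6 * 1000 * $bytesPerImage;
$yearly       = ( $serviceBytes / 1e9 ) * $pricePerGbMonth * 12;
printf( "Corpus: %.0f PB, naive cost: ~$%.0fM/year (~$%.2f per user-year)\n",
    $serviceBytes / 1e15, $yearly / 1e6, $yearly / 200e6 );            // ~650 PB, ~$234M, ~$1.17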



Cost per byte has decreased, but bytes per image from iPhone-type platforms have increased. Cost per image hasn’t changed significantly.

Storage costs do drop over time. For example, S3 costs dropped from $0.15 per gigabyte-month in 2009 to $0.03 per gigabyte-month in 2014, and cloud storage vendors have added low-cost options for data that is infrequently accessed. NAS vendors have also delivered large price reductions.

Unfortunately, these lower costs per byte are counteracted by other forces. On iPhones, increasing camera resolution, burst mode and the addition of short animations (Live Photos) have increased bytes-per-image rapidly enough to keep storage cost per image roughly constant. And iPhone images are far from the largest.

In response to these costs, photo storage services have pursued a variety of product options. To name a few: storing lower quality images or re-compressing, charging users for their data usage, incorporating advertising, selling associated products such as prints, and tying storage to purchases of handsets.

There are also a number of engineering approaches to controlling storage costs. We sketched out a few and cover three that we implemented below: adjusting thresholds on our storage systems, rolling out existing savings approaches to more images, and deploying lossless JPG compression.

Adjusting Storage Thresholds

As we dug into the problem, we looked at our storage systems in detail. We discovered that our settings were based on assumptions about high write and delete loads that didn’t hold. Our storage is pretty static. Users only rarely delete or change images once uploaded. We also had two distinct areas of just-in-case space. 5% of our storage was reserved space for snapshots, useful for undoing accidental deletes or writes, and 8.5% was held free in reserve. This resulted in about 13% of our storage going unused. Trade lore states that disks should remain 10% free to avoid performance degradation, but we found 5% to be sufficient for our workload. So we combined our two just-in-case areas into one and reduced our free space threshold to that level. This was our simplest approach to the problem (by far), but it resulted in a large gain. With a couple simple configuration changes, we freed up more than 8% of our storage.



Adjusting storage thresholds

Extending Existing Approaches

In our earlier posts, we have described dynamic generation of thumbnail sizes and perceptual compression. Combining the two approaches decreased thumbnail storage requirements by 65%, though we hadn’t applied these techniques to many of our images uploaded prior to 2014. One big reason for this: large-scale changes to older files are inherently risky, and require significant time and engineering work to do safely.

Because we were concerned that further rollout of dynamic thumbnail generation would place a heavy load on our resizing infrastructure, we targeted only thumbnails from less-popular images for deletes. Using this approach, we were able to handle our complete resize load with just four GPUs. The process put a heavy load on our storage systems; to minimize the impact we randomized our operations across volumes. The entire process took about four months, resulting in even more significant gains than our storage threshold adjustments.



Decreasing the number of thumbnail sizes

Lossless JPG Compression

Flickr has had a long-standing commitment to keeping uploaded images byte-for-byte intact. This has placed a floor on how much storage reduction we can do, but there are tools that can losslessly compress JPG images. Two well-known options are PackJPG and Lepton, from Dropbox. These tools work by decoding the JPG, then very carefully compressing it using a more efficient approach. This typically shrinks a JPG by about 22%. At Flickr’s scale, this is significant. The downside is that these re-compressors use a lot of CPU. PackJPG compresses at about 2MB/s on a single core, or about fifteen core-years for a single petabyte worth of JPGs. Lepton uses multiple cores and, at 15MB/s, is much faster than packJPG, but uses roughly the same amount of CPU time.
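
The core-year figure follows directly from the quoted throughput; a quick check (decimal units assumed):

<?php
// packJPG at ~2 MB/s on one core, applied to a petabyte of JPGs.
$seconds   = 1e15 / 2e6;                        // 5e8 seconds of CPU time
$coreYears = $seconds / ( 3600 * 24 * 365 );
printf( "%.1f core-years per petabyte\n", $coreYears );   // ~15.9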

This CPU requirement also complicated on-demand serving. If we recompressed all the images on Flickr, we would need potentially thousands of cores to handle our decompress load. We considered putting some restrictions on access to compressed images, such as requiring users to login to access original images, but ultimately found that if we targeted only rarely accessed private images, decompressions would occur only infrequently. Additionally, restricting the maximum size of images we compressed limited our CPU time per decompress. We rolled this out as a component of our existing serving stack without requiring any additional CPUs, and with only minor impact to user experience.

Running our users’ original photos through lossless compression was probably our highest-risk approach. We can recreate thumbnails easily, but a corrupted source image cannot be recovered. Key to our approach was a re-compress-decompress-verify strategy: every recompressed image was decompressed and compared to its source before removing the uncompressed source image.
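
A minimal sketch of that verify step, assuming Lepton's two-argument invocation (input file, then output file); the file names are illustrative and this is a simplified stand-in for our actual pipeline:

# losslessly re-compress, then round-trip and byte-compare before touching the source
lepton original.jpg original.lep
lepton original.lep roundtrip.jpg
if cmp -s original.jpg roundtrip.jpg; then
    rm original.jpg        # only now is the uncompressed source removed
else
    echo "verification failed; keeping original.jpg" >&2
fi
rm -f roundtrip.jpg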

This is still a work in progress. We have compressed many images, but compressing our entire corpus is a lengthy process, and we had already reached our zero-new-storage-gear goal by mid-year.

On The Drawing Board

We have several other ideas which we’ve investigated but haven’t implemented yet.

In our current storage model, we have originals and thumbnails available for every image, each stored in two datacenters. This model assumes that the images need to be viewable relatively quickly at any point in time. But private images belonging to accounts that have been inactive for more than a few months are unlikely to be accessed. We could "freeze" these images, dropping their thumbnails and recreating them when the dormant user returns. This "thaw" process would take under thirty seconds for a typical account. Additionally, for photos that are private (but not dormant), we could go to a single uncompressed copy of each thumbnail, storing a compressed copy in a second datacenter that would be decompressed as needed.

We might not even need two copies of each dormant original image available on disk. We’ve pencilled out a model where we place one copy on a slower, but underutilized, tape-based system while leaving the other on disk. This would decrease availability during an outage, but as these images belong to dormant users, the effect would be minimal and users would still see their thumbnails. The delicate piece here is the placement of data, as seeks on tape systems are prohibitively slow. Depending on the details of what constitutes a “dormant” photo these techniques could comfortably reduce storage used by over 25%.

We’ve also looked into de-duplication, but we found our duplicate rate is in the 3% range. Users do have many duplicates of their own images on their devices, but these are excluded by our upload tools.  We’ve also looked into using alternate image formats for our thumbnail storage.    WebP can be much more compact than ordinary JPG but our use of perceptual compression gets us close to WebP byte size and permits much faster resize.  The BPG project proposes a dramatically smaller, H.265 based encoding but has IP and other issues.

There are several similar optimizations available for videos. Although Flickr is primarily image-focused, videos are typically much larger than images and consume considerably more storage.

Conclusion



Optimization over several releases

Since 2013 we’ve optimized our usage of storage by nearly 50%.  Our latest efforts helped us get through 2016 without purchasing any additional storage,  and we still have a few more options available.

Peter Norby, Teja Komma, Shijo Joy and Bei Wu formed the core team for our zero-storage-budget project. Many others assisted the effort.



Improved Graphs: Powered by the Open Source DO Agent

Published 3 Jan 2017 by DigitalOcean in DigitalOcean Blog.

At DigitalOcean, we want to make monitoring the services you've deployed simple and easy. As engineers, we know that having greater insight into the machines running in your fleet increases the speed at which you can troubleshoot issues.

That's why we're excited to launch new and improved memory and disk space graphs! We've gathered the knowledge that we've learned involving telemetry and performance observability and poured it into an open-source project called do-agent. This monitoring application helps you get a better picture of the health of your Droplets by adding several new graphs to the Droplet graphs page and making the existing graphs much more precise.

New graphs

To get these graphs, you'll need to have the new agent. On new Droplets, just click the Monitoring checkbox during Droplet creation.

Select monitoring

On existing Droplets, you can install the agent by running:

curl -sSL https://agent.digitalocean.com/install.sh | sh

Or get all the details in this tutorial on the DigitalOcean community site.

How Does do-agent Work?

do-agent is a lightweight application which runs on Droplets and periodically collects system performance/state metrics. The collected metrics are immediately transmitted to the monitoring API endpoints and made available to you via the Droplet graphs page.

When we began thinking of do-agent, security was one of our top priorities; we wanted to take great care not to collect any data that may be considered private. How could we collect the metrics we felt were necessary with an agent that would require the minimum amount of resources and security privileges?

We chose to collect system information from the /proc pseudo filesystem, which contains everything from CPU metrics to Linux kernel versions. In true Unix fashion, /proc presents system information laid out as files on the filesystem; the hierarchy determines the information you are attempting to access. The greatest benefit we gain from using /proc is the ability to access this information as a very low-privileged user.
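
For example, the raw data behind the load and memory graphs comes from plain text files that any unprivileged user can read (the output below is illustrative):

$ cat /proc/loadavg
0.03 0.07 0.05 1/123 4567
$ grep -E 'MemTotal|MemAvailable' /proc/meminfo
MemTotal:        1016860 kB
MemAvailable:     612340 kB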

The /proc files are read and converted into metrics that are transmitted via gRPC to a metrics endpoint. The agent authenticates as belonging to your Droplet and tags all of your data with the Droplet ID.

What's Next?

This new agent opens up many possibilities for future tools that will provide insight into Droplet performance. We're not stopping here! Currently, we're working on a suite of tools which will enable engineers to collectively monitor groups of Droplets instead of individual Droplets.

do-agent also has a plugin architecture built in. We don't have any plugins written yet, but this architecture enables us to create plugins that observe more than just Droplet metrics; you could potentially collect performance metrics from other software running on or alongside your own.

The Prometheus project was a great inspiration and model for this project (and is used in the agent itself), and the ability for you to install plugins to collect arbitrary metrics was inspired by the Munin open-source project. do-agent is itself open source, and we welcome contributions!

We're excited about the possibilities these graphs and this agent open up for us. If you are too, sign up to be the first to know as we begin to roll out new monitoring and alerting features early this year.


Impressions of Jerusalem and Tel Aviv

Published 3 Jan 2017 by Tom Wilson in tom m wilson.

Arriving in Israel… Coming over the border from Jordan it was forbidding and stern – as though I was passing through a highly militarised zone, which indeed I was. Machine gun towers, arid, blasted dune landscape, and endless security checks and waiting about. Then I was in the West Bank. The first thing I noticed […]

Jerash

Published 29 Dec 2016 by Tom Wilson in tom m wilson.

I have been travelling West from Asia.  When I was in Colombo I photographed a golden statue of the Buddha facing the Greco-Roman heritage embodied in Colombo’s Town Hall.  And now I’ve finally reached a real example of the Roman Empire’s built heritage – the city of Jerash in Jordan.  Jerash is one of the […]

We Are Bedu

Published 26 Dec 2016 by Tom Wilson in tom m wilson.

While in Wadi Musa I had met our Bedu guide’s 92 year old mother. She was living in an apartment in the town. I asked her if she preferred life when she was a young woman and there was less access to Western conveniences, or if she preferred life in the town today. She told me […]

Montreal Castle

Published 26 Dec 2016 by Tom Wilson in tom m wilson.

I’ve been at Montreal (known in Arabic as Shawbak) Castle, a crusader castle south of Wadi Musa. Standing behind the battlements I had looked through a slit in the stone. Some of this stone had been built by Christians from Western Europe around 1115 AD in order to take back the Holy Land from Muslims. Through […]

Petra

Published 26 Dec 2016 by Tom Wilson in tom m wilson.

Mountains entered. Size incalculable. Mystical weight and folds of stone. Still blue air. The first day in Petra we headed out to Little Petra, a few kms away from the more famous site, where a narrow canyon is filled with Nabatean caves, carved around 2000 years ago. On the way we took a dirt track […]

Ghost of blog posts past

Published 25 Dec 2016 by Bron Gondwana in FastMail Blog.

Last year I posted about potential future features in FastMail, and the magic outbox handling support that I had just added to Cyrus. In the spirit of copying last year, I'm doing a Dec 25th post again (with a bit more planning).

During this year's advent I've had more support than previous years, which is great! I didn't have to write as much. One day we might run out of things to say, but today is not that day.

Last year's post definitely shows the risks of making future predictions out loud, because for various reasons we spent a lot of time on other things this year, and didn't get the snooze/"delayed send"/"tell me if no reply" features done.

But the underlying concepts didn't go to waste. We're using magic replication of a hidden folder, "#jmap", for blob uploads now, and we're indexing every single body part by sha1, allowing us to quickly find any blob, from any message, anywhere in a user's entire mailstore.

One day, this could help us to efficiently de-duplicate big attachments and save disk space for users who get the message mailed backwards and forwards a lot.

And features that fall under the general category of "scheduled future actions on messages and conversations" are still very much on our roadmap.

Looking ahead

When we developed our values statement a couple of weeks ago, we spent a lot of time talking about our plans for the next few years, and indeed our plans for the next few months as well!

We also distilled a mission statement: FastMail is the world's best independent email service for humans (explicitly not transactional/analytics/marketing emails), providing a pleasant and easy-to-use interface on top of a rock solid backend. Our other product lines, Pobox (email for life) and Listbox (simple mass email) complement our offering, and next year you'll see another product that builds on the expertise of both teams.

Upgrading the remaining server-side generated screens into the main app is on the cards, as is converting all our APIs to the JMAP datamodel. Once we're happy with APIs that we can support long term, we'll be publishing guides to allow third parties to build great things on top of our platform.

And of course we'll continue to react to the changing world that we live in, with a particular focus on making sure all our features work, and work well, on interfaces of all sizes. Our commitment to standards and interoperability is undiminished. We've joined M3AAWG and will be attending our first of their conferences next year, as well as continuing to contribute to CalConnect and getting involved with the IETF. Some of our staff are speaking at Linux Conf Australia in January, see us there!

New digs

We've spent a lot of the past couple of weeks looking at new office space. We're outgrowing our current offices, and since our lease expires next year, it's time to upgrade. We particularly need space because we'll be investing heavily in staffing next year, with a full time designer joining us here in Melbourne. We're also planning to keep improving our support coverage, and adding developers to allow us to have more parallel teams working on different things.

I totally plan to make sure I get the best seat in the house when office allocation comes around!

Technical debt

We moved a lot slower on some things than we had hoped in the past year. The underlying reason is the complexity that grows in a codebase that's been evolving over more than 15 years. Next year we will be taking stock, simplifying sections of that code and automating many of the things we're doing manually right now.

There's always a balance here, and my theory for automating tasks goes something like:

  1. do it once (or multiple times) to make sure you understand the problem
  2. do it another time, tracking the exact steps that were taken and things that were checked to make sure it was working properly
  3. write the automation logic and run it by hand, watching each step carefully to make sure it's doing what you want - as many times as necessary to be comfortable that it's all correct
  4. turn on automation and relax!

For my own part, the Calendar code is where I'm going to spend the bulk of my cleanup work; there are some really clunky bits in there. And I'm sure everyone else has their own area they are embarrassed by. Taking the time to do cleanup weeks where we have all promised not to work on any new features will help us in the long run; it's like a human sleeping and allowing the brain to reset.

What's exciting next year?

Me, I'm most excited about zeroskip, structured db and making Cyrus easier to manage, and I've asked a few other staff to tell me what excites them about 2017:

"Replacing our incoming SMTP and spam checking pipeline with a simpler and easier to extend system." — Rob M

"Can't wait to hang out at LCA (see you there?) where I'm doing my (first ever talk), and meet customers present (and future)! (all of the brackets)" — Nicola

"Making more tools and monitoring and other internal magic so everyone can get stuff done faster without worrying about breaking anything." — Rob N

"The continued exchange of ideas and software between FastMail and Pobox. I think that 2017 will be the year when a lot of our ongoing sharing will begin to bear fruit, and it's going to be fantastic" — Rik

"Focusing on Abuse and Deliverability — making sure your mail gets delivered, and keeping nasties out of your Inbox" — Marc

"Getting our new project in front of customers — it brings the best parts of Listbox's group email infrastructure together with Fastmail's interface expertise. It's going to be awesome!" — Helen


Now That’s What I Call Script-Assisted-Classified Pattern Recognized Music

Published 24 Dec 2016 by Jason Scott in ASCII by Jason Scott.

Merry Christmas; here is over 500 days (12,000 hours) of music on the Internet Archive.

Go choose something to listen to while reading the rest of this. I suggest either something chill or perhaps this truly unique and distinct ambient recording.

 

Let’s be clear. I didn’t upload this music, I certainly didn’t create it, and actually I personally didn’t classify it. Still, 500 Days of music is not to be ignored. I wanted to talk a little bit about how it all ended up being put together in the last 7 days.

One of the nice things about working for a company that stores web history is that I can use it to do archaeology against the company itself. Doing so, I find that the Internet Archive started soliciting “the people” to begin uploading items en masse around 2003. This is before YouTube, and before a lot of other services out there.

I spent some time tracking dates of uploads, and you can see various groups of people gathering interest in the Archive as a file destination in these early 00’s, but a relatively limited set all around.

Part of this is that it was a little bit of a non-intuitive effort to upload to the Archive; as people figured it all out, they started using it, but a lot of other people didn’t. Meanwhile, Youtube and other also-rans come into being and they picked up a lot of the “I just want to put stuff up” crowd.

By 2008, things start to take off for Internet Archive uploads. By 2010, things take off so much that 2008 looks like nothing. And now it's dozens or hundreds of multi-media uploads a day through all the Archive's open collections, not counting others who work with specific collections they've been given administration of.

In the case of the general uploads collection of audio, which I’m focusing on in this entry, the number of items is now at over two million.

This is not a sorted, curated, or really majorly analyzed collection, of course. It’s whatever the Internet thought should be somewhere. And what ideas they have!

Quality is variant. Finding things is variant, although the addition of new search facets and previews have made them better over the years.

I decided to do a little experiment: slight machine-assisted “find some stuff” sorting. Let it loose on 2 million items in the hopper, see what happens. The script was called Cratedigger.

Previously, I did an experiment against keywording on texts at the archive – the result was "bored intern" level, which was definitely better than nothing, and in some cases, that bored intern could slam through a 400 page book and determine a useful word cloud in less than a couple of seconds. Many collections of items I uploaded have these word clouds now.

It’s a little different with music. I went about it this way with a single question:

Cratediggers is not an end-level collection – it’s a holding bay to do additional work, but it does show the vast majority of people would upload a sound file and almost nothing else. (I’ve not analyzed quality of description metadata in the no-image items – that’ll happen next.) The resulting ratio of items-in-uploads to items-for-cratediggers is pretty striking – less than 150,000 items out of the two million passed this rough sort.

The Bored Audio Intern worked pretty OK. By simply sending a few parameters, The Cratediggers Collection ended up building on itself by the thousands without me personally investing time. I could then focus on more specific secondary scripts that do things in an even more lazy manner, ensuring laziness all the way down.

The next script allowed me to point to an item in the cratediggers collection and say “put everything by this uploader that is in Cratediggers into this other collection”, with “this other collection” being spoken word, sermons, or music. In general, a person who uploaded music that got into Cratediggers generally uploaded other music. (Same with sermons and spoken word.) It worked well enough that as I ran these helper scripts, they did amazingly well. I didn’t have to do much beyond that.

As of this writing, the music collection contains over 400 solid days of Music. They are absolutely genre-busting, ranging from industrial and noise all the way through beautiful Jazz and acapella. There are one-of-a-kind Rock and acoustic albums, and simple field recordings of Live Events.

And, ah yes, the naming of this collection… Some time ago I took the miscellaneous texts and writings and put them into a collection called Folkscanomy.

After trying to come up with the same sort of name for sound, I discovered a very funny thing: you can't really attach any two words involving sound together without finding some company or manufacturer already using the result as a name. Trust me.

And that’s how we ended up with Folksoundomy.

What a word!

The main reason for this is I wanted something unique to call this collection of uploads that didn’t imply they were anything other than contributed materials to the Archive. It’s a made-up word, a zesty little portmanteau that is nowhere else on the Internet (yet). And it leaves you open for whatever is in them.

So, about the 500 days of music:

Absolutely, one could point to YouTube and the mass of material being uploaded there as being superior to any collection sitting on the archive. But the problem is that they have their own robot army, which is a tad more evil than my robotic bored interns; you have content scanners that have both false positives and strange decorations, you have ads being put on the front of things randomly, and you have a whole family of other small stabs and Jabs towards an enjoyable experience getting in your way every single time. Internet Archive does not log you, require a login, or demand other handfuls of your soul. So, for cases where people are uploading their own works and simply want them to be shared, I think the choice is superior.

This is all, like I said, an experiment – I’m sure the sorting has put some things in the wrong place, or we’re missing out on some real jewels that didn’t think to make a “cover” or icon to the files. But as a first swipe, I moved 80,000 items around in 3 days, and that’s more than any single person can normally do.

There’s a lot more work to do, but that music collection is absolutely filled with some beautiful things, as is the whole general Folksoundomy collection. Again, none of this is me, or some talent I have – this is the work of tens of thousands of people, contributing to the Archive to make it what it is, and while I think the Wayback Machine has the lion’s share of the Archive’s world image (and deserves it), there’s years of content and creation waiting to be discovered for anyone, or any robot, that takes a look.


My Top Ten Gigs (as a Punter) and Why

Published 24 Dec 2016 by Dave Robertson in Dave Robertson.

I had a dream. In the dream I had a manager. The manager told me I should write a “list” style post, because they were trending in popularity. She mumbled something about the human need for arbitrary structure amongst the chaos of existence. Anyway, these short anecdotes and associated music clips resulted. I think I really did attend these gigs though, and not just in a dream.

10. Dar Williams at Phoenix Concert Theatre, Toronto, Canada – 20 August 2003

You don’t need fancy instrumentation when you’re as charming, funny and smart as Dar Williams. One of her signature tunes, The Christians and the Pagans, seems appropriate to share this evening, given the plot takes place on Christmas Eve.

9. Paul Kelly at Sidetrack Cafe, Edmonton, Canada – 18 March 2004

The memorable thing about this gig was all the Aussies coming out of the woodwork of this icy Prairie oil town, whose thriving music underbelly was a welcome surprise to me. Incidentally, the Sidetrack Cafe is the main location of events in “For a Short Time” by fellow Aussie songwriter Mick Thomas. Tiddas did a sweet cover of this touching song:

8. Hussy Hicks at the Town Hall, Nannup – 5 March 2016

Julz Parker and Leesa Gentz have serious musical chops. Julz shreds on guitar and Leesa somehow manages not to shred her vocal cords despite belting like a beautiful banshee. Most importantly, they have infectious fun on stage, and I could have picked any of the gigs I've been to, but I'll go with the sweat-anointed floorboards of one of their infamous Nannup Town Hall shows. This video is a good little primer on the duo.

7. The National at Belvoir Amphitheatre, Swan Valley – 14 February 2014

After this gig I couldn't stop dancing in the paddock with friends and strangers amongst the car headlights. The National are a mighty fine indie rock band, fronted by the baritone voice of Matt Berninger. He is known for downing a bottle of wine on stage, and is open about it being a crutch to deal with nerves and get in the zone. This clip from Glastonbury is far from his best vocal delivery, but it's hard to argue that it's not exciting, and the audience are certainly on his wavelength!

6. Kathleen Edwards at Perth Concert Hall balcony – 17 February 2006

I was introduced to Kathleen Edwards by a girlfriend who covered "Hockey Skates", and I didn't hesitate to catch her first, and so far only, performance in Perth. The easy banter of this fiery redhead, and self-proclaimed potty mouth, included warning a boisterous woman in the audience that her husband/guitarist, Colin Cripps, was not "on the market". Change the Sheets is a particularly well-produced song of Kathleen's, engineered by Justin Vernon (aka Bon Iver):

5. The Cure at Perth Arena – 31 July 2016

One of the world’s most epic bands, they swing seamlessly from deliriously happy pop to gut-wrenching rock dirges, all with perfectly layered instrumentation. This was third Cure show and my favourite, partly because I was standing (my preferred way to experience any energetic music) and also great sound that meant I didn’t need my usual ear plugs. Arguably the best Cure years were 85 to 92 when they had Boris Williams on drums, but this was a fine display and at the end of the three hours I wanted them to keep playing for three more. “Lovesong” is my innocent karaoke secret:

4. Lucie Thorne & Hamish Stuart in my backyard – 26 February 2014

I met Lucie Thorne at a basement bar called the Green Room in Vancouver in 2003. She is the master of the understatement, with a warm voice that glides out the side of her mouth, and evocative guitar work cooked just the right amount. Her current style is playing a Guild Starfire through a tremolo pedal into a valve amp, while being accompanied by the tasteful jazz drumming legend Hamish Stuart. Here’s a clip of the house concert in question:

3. Ryan Adams and the Cardinals at Metropolis, Fremantle – 25 January 2009

The first review I read of a Ryan Adams album said he could break hearts singing a shopping list, and he's probably the artist I've listened to the most in the last decade. He steals ideas from the greats of folk, country, rock, metal, pop and alt-<insert genre>, but does it so well and so widely, and with such a genuine love and talent for music. I'm glad I caught The Cardinals in their prime and there was a sea of grins flowing out onto the street after the three hour show. This stripped back acoustic version of "Fix It" is one of my favourites:

2. Damien Rice at Civic Hotel – 9 October 2004

I feel Damien Rice’s albums, with the exception of “B-Sides”, are over-produced with too many strings garishly trying to tug your heart strings. Live and solo however, Damien is a rare force with no strings attached or required. I heard a veteran music producer say the only solo live performer he’s seen with a similar power over an audience was Jeff Buckley. I remember turning around once at the Civic Hotel gig and seeing about half the audience in tears, and I was well and truly welling up.

1. Portland Cello Project performing Radiohead’s Ok Computer at Aladin Theatre, Portland, Oregon – 22 September 2012

Well if crying is going to be a measure of how good a gig is then choosing my number one is easy. I cried all the way through the Portland Cello Project’s performance of Ok Computer and wrote a whole separate post about that.

Honourable mentions:

Joe Pug at Hardly Strictly Bluegrass, San Francisco – October 2012.

Yothu Yindi at Curtin University – 1996

Billy Bragg at Enmore Theatre, Sydney – 14 April 1999

Sally Dastey at Mojos – 2004

CR Avery at Marine Club in Vancouver – 28 November 2003

Jill Sobule at Vancouver Folk Festival – July 2003

Let the Cat Out in my lounge room – 2011

Martha Wainwright at Fly By Night, Fremantle – 22 November 2008

The Mountain Goats at The Bakery, Perth –  1 May 2012… coming to town again in April – come!



SPF, DKIM & DMARC: email anti-spoofing technology history and future

Published 24 Dec 2016 by Rob Mueller in FastMail Blog.

This is the twenty fourth and final post in the 2016 FastMail Advent Calendar. Thanks for reading, and as always, thanks for using FastMail!


Quick, where did this email come from and who was it sent to?

From: PayPal <service@paypal.com.au>
To: Rob Mueller <robm@fastmail.fm>
Subject: Receipt for your donation to Wikimedia Foundation, Inc.

Actually, these headers tell you nothing at all about where the email really came from or went to. There are two separate parts to the main email standards. RFC5322 (originally RFC822/RFC2822) specifies the format of email messages, including headers like from/to/subject and the body content. However, it doesnʼt specify how messages are transmitted between systems. RFC5321 (originally RFC821/RFC2821) describes the Simple Mail Transfer Protocol (SMTP) which details how messages are sent from one system to another.

The separation of these causes a quirk: the format of a message need not have any relation to the source or destination of a message. That is, the From/To/Cc headers you see in an email may not have any relation to the sender of the message or the actual recipients used during the SMTP sending stage!

When the email standards were developed, the internet was a small network of computers at various universities where people mostly knew each other. The standard was developed with the assumption that the users and other email senders could be trusted.

So, the From header would be a userʼs own email address. When you specified who you wanted to send the message to, those addresses would be put in the To header and used in the underlying SMTP protocol delivering the messages to those people (via the RCPT TO command in SMTP). This separation of email format and transport also allows features like Bcc (blind carbon copy) to work. Any addresses a message is bccʼd to donʼt appear in the message headers, but are used in the underlying SMTP transport to deliver the message to the right destination.
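
To make that split concrete, hereʼs an abbreviated, made-up SMTP exchange (server responses omitted). The envelope includes a Bccʼd recipient that never appears in the headers, and the MAIL FROM address doesnʼt match the From header at all:

MAIL FROM:<bounces@bulk-sender.example>
RCPT TO:<alice@example.com>
RCPT TO:<hidden-bcc@example.net>
DATA
From: Newsletter <news@brand.example>
To: alice@example.com
Subject: Hello

(message body)
.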

Over time, of course, this assumption of a friendly environment became less and less true. We now live in a world where much of the internet is downright hostile. We need to heavily protect our systems from mountains of spam and malicious email, much of it designed to trick people.

There are many layers of protection from spam, from RBLs to detect known spam-sending servers, to content analysis that helps classify messages as spammy or not. In this post, we want to talk about the major anti-spoofing techniques that have been developed for email.

SPF

One of the earliest anti-spoofing efforts was SPF (Sender Policy Framework). The idea behind SPF was that senders could specify, via an SPF record published in DNS, what servers were allowed to send email for a particular domain. For example, only servers X, Y & Z are allowed to send email for @fastmail.com addresses.
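
An SPF policy is published as a TXT record on the domain itself. A made-up example (the IP range and include host are placeholders, not any real providerʼs record):

$ dig +short example.com TXT
"v=spf1 ip4:192.0.2.0/24 include:_spf.mail-provider.example -all"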

Unfortunately, SPF has many problems. For starters, it only works on the domain in the SMTP sending protocol, known as the MAIL FROM envelope address. No email software ever displays this address. (Its main use is as the address to send error/bounce emails to if final delivery fails.) And since thereʼs no need for the MAIL FROM address to match the From header address in any way, effectively the only thing youʼre protecting against is the spoofing of an email address no one ever sees.

In theory, this does help to address one particular type of spam: it helps reduce backscatter email. Backscatter is the bounce messages you see when mail that spammers sent pretending to be you can't be delivered.

In practice, it would only do that if people actually blocked email that failed SPF checks at SMTP time. They rarely do that because SPF has a major problem. It completely breaks traditional email forwarding. When a system forwards an email, itʼs supposed to preserve the MAIL FROM address so any final delivery failures go back to the original sender. Unfortunately, that means when someone sends from Hotmail to FastMail, and then you forward on from FastMail to Gmail, in the FastMail to Gmail hop, there's a mismatch. The MAIL FROM address will be an @hotmail.com domain, but the SPF record will say that FastMail isnʼt allowed to send email with an @hotmail.com domain address!

There was an attempt to fix this (SRS), but itʼs surprisingly complex. Given the relatively low value of protection SPF provides, not many places ended up implementing SRS. The situation we ended up with is that SPF is regarded as a small signal for email providers' use. If SPF passes, itʼs likely the email is legitimately from the domain in the MAIL FROM address. If it fails, well... thatʼs not really much information at all. It could be a forwarded email, it could be a misconfigured SPF record, or many other things. But stay tuned for its next life in DMARC.

DKIM

DKIM (DomainKeys Identified Mail) is a significantly more complex and interesting standard compared to SPF. It allows a particular domain owner (again, via a record published in DNS) to cryptographically sign parts of a message so that a receiver can validate that they havenʼt been altered.

DKIM is a bit fiddly at the edges and took a while to get traction, but is now commonly used. Almost 80% of email delivered to FastMail is DKIM signed.

So letʼs take the message we started with at the top and add a DKIM signature to it.

DKIM-Signature: v=1; a=rsa-sha256; d=paypal.com.au; s=pp-dkim1; c=relaxed/relaxed;
    q=dns/txt; i=@paypal.com.au; t=1480474251;
    h=From:From:Subject:Date:To:MIME-Version:Content-Type;
    bh=Vn79RZZBrNIu4HFwGMOOAezyw/2Ag+w+avW1yscPcUw=;
    b=...
From: PayPal <service@paypal.com.au>
To: Rob Mueller <robm@fastmail.fm>
Subject: Receipt for your donation to Wikimedia Foundation, Inc.

Using a combination of public key cryptography and DNS lookups, the receiver of this email can determine that the domain "paypal.com.au" signed the body content of this email and a number of the email's headers (in this case, From, Subject, Date, To and a couple of others.) If it validates, we know the body content and specified headers have not been modified by anyone along the way.
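
The public key comes from DNS: the s= (selector) and d= (domain) tags in the signature tell the receiver which record to fetch, in this case pp-dkim1._domainkey.paypal.com.au. The lookup looks something like this (the p= value is truncated here, and the actual record contents may differ):

$ dig +short pp-dkim1._domainkey.paypal.com.au TXT
"v=DKIM1; k=rsa; p=MIIBIjANBgkqhkiG9w0BAQ..."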

While this is quite useful, there are still big questions that arenʼt answered.

What about emails with a From address of @paypal.com.au that arenʼt DKIM signed by paypal.com.au? Maybe not every department within PayPal has DKIM signing correctly set up. Should we treat unsigned emails as suspicious or not?

Also, how do I know if I should trust the domain that signs the email? In this case, paypal.com.au is probably owned by the Australian division of PayPal Holdings, Inc, but what about paypal-admin.com? Itʼs not obvious what domains I should or shouldnʼt trust. In this case, the From address matches the DKIM signing domain, but that doesnʼt need to be the case. You can DKIM sign with any domain you want. Thereʼs nothing stopping a scammer using an @paypal.com.au address in the From header, but signing with the paypal-admin.com domain.

Despite this, DKIM provides real value. It allows an email receiver to associate a domain (or multiple, since multiple DKIM signatures on an email are possible and in some cases useful) with each signed email. Over time, the receiver can build up a trust metric for that domain and/or associated IPs, From addresses, and other email features. This helps discriminate between "trusted" emails and "untrusted" emails.

DMARC

DMARC (Domain-based Message Authentication, Reporting & Conformance) attempts to fix part of this final trust problem by building on DKIM and SPF. Again, by publishing a record in DNS, domain owners can specify what email receivers should do with email received from their domain. In the case of DMARC, we consider email to be from a particular domain by looking at the domain in the From header – the address you see when you receive a message.

In its basic form, when you publish a DMARC record for your domain receivers should:

  1. Check that the From header domain matches the DKIM signing domain (this is called alignment), and that the DKIM signature is valid.

  2. Check that the From header domain matches the SMTP MAIL FROM domain, and that the senderʼs IP address is validated by SPF.

If either is true, the email "passes" DMARC. If both fail, the DMARC DNS record specifies what the receiver should do, which can include quarantining the email (sending it to your spam folder) or rejecting the email. Additionally, the DMARC record can specify an email address to send failure reports to. DMARC also allows senders to specify which percentage of their mail to apply DMARC to, so they can make changes in a gradual and controlled way.

So back to our example email:

DKIM-Signature: v=1; a=rsa-sha256; d=paypal.com.au; s=pp-dkim1; c=relaxed/relaxed;
    q=dns/txt; i=@paypal.com.au; t=1480474251;
    h=From:From:Subject:Date:To:MIME-Version:Content-Type;
    bh=Vn79RZZBrNIu4HFwGMOOAezyw/2Ag+w+avW1yscPcUw=;
    b=...
From: PayPal <service@paypal.com.au>
To: Rob Mueller <robm@fastmail.fm>
Subject: Receipt for your donation to Wikimedia Foundation, Inc.

In this case, the From header domain is paypal.com.au. Letʼs check if they publish a DMARC policy.

$ dig +short _dmarc.paypal.com.au TXT
"v=DMARC1; p=reject; rua=mailto:d@rua.agari.com; ruf=mailto:dk@bounce.paypal.com,mailto:d@ruf.agari.com"

Yes, they do. Letʼs run our checks! Does the From domain match the DKIM signing domain paypal.com.au? Yes, so we have alignment. If the email wasnʼt DKIM signed, or if it were DKIM signed but the domain had been paypal-admin.com (e.g. signed by a scammer), then there wouldnʼt have been alignment, and so DMARC would have failed. At that point, we would have consulted the DMARC policy, which specifies p=reject, telling us to reject the forged email.

In this case (I havenʼt included the entire DKIM signature, but I can tell you it validated), the email did pass DMARC. So we can accept it. Because of alignment, we know the domain in the From address also matches the DKIM signing domain. This allows users to be sure that when they see a From: @paypal.com.au address, they know that itʼs a real message from paypal.com.au, not a forged one!

This is why DMARC is considered an anti-phishing feature. It finally means that the domain in the From address of an email canʼt be forged (at least for domains that DKIM sign their emails and publish a DMARC policy). All that, just to ensure the domain in the From address canʼt be forged, in some cases.

Unfortunately, as is often the case, this feature also brings some problems.

DMARC allows you to use SPF or DKIM to verify a message. If you donʼt DKIM sign a message and rely only on SPF, then when a message is forwarded from one provider to another, DMARC will fail. If you have a p=reject policy set up, that forwarding will fail. Unlike plain SPF, where failure is a "weak signal", a DMARC policy is supposed to tell receivers strictly what to do, making bounces a strong possibility.

The solution: always make sure you DKIM sign mail if you have a DMARC policy. If your email is forwarded, SPF will break, but DKIM signatures should survive. SRS wonʼt help with DMARC, because replacing the MAIL FROM envelope with your own domain means the MAIL FROM domain doesnʼt match the From header domain. This is an alignment failure, and so not a pass result for DMARC.

I say "should survive", because, again, not all providers are great at that. In theory, forwarding systems preserve the complete structure of your message. Unfortunately, thatʼs not always the case. Even large providers have problems with forwarding inadvertently slightly altering the content/structure of an email (Exchange based systems (including outlook.com) and iCloud are notorious for this). Even a slight modification can and will break DKIM signatures. Again, combined with a DMARC p=reject policy, this can result in email being rejected.

The solutions in this case are to:

  1. Bug those providers to fix their email forwarding and not to modify the email in transit. DKIM is now a well established standard; providers should ensure their forwarding doesnʼt break DKIM signatures.

  2. Switch to using POP to pull email from your remote provider. We donʼt do SPF/DKIM/DMARC checking on emails pulled from a remote mailbox via POP.

  3. Donʼt forward this mail. Wherever the emails are coming from, change your email address at that service provider so it points directly to your FastMail email address and avoids forwarding altogether.

Thereʼs one other case thatʼs a known big issue with DMARC: mailing lists. Mailing lists can be considered a special case of email forwarding: you send to one address, and itʼs forwarded to many other addresses (the mailing list members). However, itʼs traditional for mailing lists to modify the emails during forwarding, by adding unsubscribe links or standard signatures to the bottom of every message and/or adding a [list-id] tag to the message subject.

DKIM signing subjects and message bodies is very common. Changing them breaks the DKIM signature. So, if the sender's domain has a p=reject DMARC policy, then when the mailing list software attempts to forward the message to all the mailing list members, the receiving systems will see a broken DKIM signature and thus reject the email. (This was actually a significant problem when Yahoo and AOL both enabled p=reject on their user webmail service domains a few years ago!)

Fortunately, thereʼs a relatively straightforward solution to this. Mailing list software can rewrite the From address to one the mailing list controls, and re-sign the message with DKIM for that domain. This and a couple of other solutions are explained on the DMARC information website. These days, the majority of mailing list software systems have implemented one of these changes, and those that havenʼt will very likely have to when Gmail enables p=reject on gmail.com sometime early next year. Not being able to forward emails from the worldʼs largest email provider will definitely hamper your mailing list.

These authentication systems affect FastMail in two ways: what we do for email received from other senders, and what we do when sending email.

SPF, DKIM & DMARC for email received at FastMail

Currently, FastMail does SPF, DKIM and DMARC checking on all incoming email received over SMTP (but not email retrieved from remote POP servers).

Passing or failing SPF and/or DKIM validation only adjusts a message's spam score. We donʼt want to discriminate against a failing DKIM signature for an important domain, and we donʼt want to whitelist a spammy domain with a valid DKIM signature. A DKIM signature is treated as context information for an email, not a strong whitelist/blacklist signal on its own.

For DMARC, the domain owners are making a strong statement about what they want done with email from their domains. For domains with a p=quarantine policy, we give failing emails a high spam score to ensure they go to the userʼs Spam folder. For domains with a p=reject policy, we donʼt currently reject at SMTP time but effectively still do a quarantine action with an even higher score. We hope to change this in the future after adding some particular exceptions known to cause problems.

We add a standard Authentication-Results header to all received emails, explaining the results of the SPF, DKIM and DMARC checks applied. Surprisingly, the existing software to do this was either poorly maintained or buggy, so we ended up writing an open source solution we hope others will use.

Back to our example again. Hereʼs that PayPal email with the corresponding Authentication-Results header.

Authentication-Results: mx2.messagingengine.com;
    dkim=pass (2048-bit rsa key) header.d=paypal.com.au header.i=@paypal.com.au header.b=PVkLotf/;
    dmarc=pass header.from=paypal.com.au;
    spf=pass smtp.mailfrom=service@paypal.com.au smtp.helo=mx2.slc.paypal.com
DKIM-Signature: v=1; a=rsa-sha256; d=paypal.com.au; s=pp-dkim1; c=relaxed/relaxed;
    q=dns/txt; i=@paypal.com.au; t=1480474251;
    h=From:From:Subject:Date:To:MIME-Version:Content-Type;
    bh=Vn79RZZBrNIu4HFwGMOOAezyw/2Ag+w+avW1yscPcUw=;
    b=...
From: PayPal <service@paypal.com.au>
To: Rob Mueller <robm@fastmail.fm>
Subject: Receipt for your donation to Wikimedia Foundation, Inc.

You can see SPF, DKIM, and DMARC all passed.

The information in this header is used by other parts of the FastMail system. For instance, if youʼve added service@paypal.com.au to your address book to whitelist it, weʼll ignore the whitelisting if DMARC validation fails. This ensures that a scammer canʼt create an email with a forged From address of service@paypal.com.au and get it into your Inbox because youʼve whitelisted that From address.

SPF, DKIM & DMARC for FastMail and user domains

All FastMail domains currently have a relaxed SPF policy (by design because of legacy systems, see DMARC below) and we DKIM sign all sent email. We actually sign with two domains, the domain in the From header, as well as our messagingengine.com domain. This is to do with some Feedback Loops, which use the DKIM signing domain to determine the source of the message.

For user domains, weʼll also publish a relaxed SPF policy and a DKIM record if you use us to host the DNS for your domain. If you use another DNS provider, you need to make sure you copy and publish the correct DKIM record at your DNS provider. Once we detect itʼs setup, weʼll start DKIM signing email you send through us.
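
If your DNS is hosted elsewhere, you can check that the record is visible before expecting signing to start. The selector below is just a placeholder; use whichever selector appears in the record we give you:

$ dig +short <selector>._domainkey.yourdomain.com TXT
"v=DKIM1; k=rsa; p=..."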

Currently, FastMail doesnʼt have a DMARC policy for any of our domains, and we donʼt publish a default policy for user domains either. This means that users can send emails with @fastmail.com From addresses from anywhere. This is a bit of a legacy situation. When FastMail started more than 16 years ago, none of these standards existed. It was common for people to set up all sorts of convoluted ways of sending email with the assumption they could send with any From address they wanted. (Old internet connected fax/scanner machines are a particularly notorious example of this.)

Over time, this is becoming less and less true, and more and more people are expecting that emails will be DKIM signed and/or have valid SPF and/or have a DMARC policy set for the domain. Itʼs likely sometime in the future weʼll also enable a p=reject policy for our domains. To send with an @fastmail.com/@sent.com/etc From address, youʼll have to send through our servers. This is perfectly possible with authenticated SMTP, something basically everything supports these days.

Ongoing problems

Even though DMARC allows us to verify that the domain in the From header actually sent and authenticated the email and its contents, a great anti-phishing feature, itʼs still a long way from stopping phishing. As we personally experienced, people donʼt check their emails with a skeptical eye. We regularly saw phishing emails sent to FastMail users like:

From: No Reply <joeblogs@completelyrandomsite.com>
To: foobar@fastmail.com
Subject: Urgent! Your account is going to be closed!

Click [here](http://example.com) right now or your account will be closed

Enough people clicked on it, and filled in the login form on a bogus website (that didnʼt even look much like FastMail), that weʼd see multiple stolen accounts daily. Unfortunately, trying to educate users just doesnʼt seem to work.

One of the main advantages of email is that itʼs a truly open messaging system. Anyone in the world can set up an email system and communicate with any other email system in the world. Itʼs not a walled garden controlled by a single company or website. This openness is also its biggest weakness, since it means legitimate senders and spammers/scammers are on an equal footing. This means that email will continue its evolutionary arms race between spammers/scammers and receivers into the future, trying to determine if each email is legitimate using more and more factors. Unfortunately, this means there will always be false positives (emails marked as spam that shouldnʼt be) and false negatives (spam/scam emails that make it through to a personʼs inbox). Thereʼs never going to be a "perfect" email world, regardless of what systems are put in place, but we can keep trying to get better and better.

Email authentication in the future

Although the main problem of mailing lists' incompatibility with DMARC p=reject policies has mostly been solved, the fix creates another problem: receivers now have to rely on their trust in the mailing list provider's domain. This gives spammers an incentive to target mailing lists, hoping for laxer spam checking controls that will forward the email to final receiving systems that trust the mailing list provider. An emerging standard called ARC attempts to let receivers peer back to previous receiving servers in a trusted way, so they can see authentication results and associated domains from earlier stages of a multi-step delivery path.

One thing we would like to see is some way to associate a domain with a real-world entity. One way would be to piggyback on the SSL Extended Validation (EV) Certificate system. Obtaining an EV certificate requires proof of the requesting entity's legal identity. You see this in web browsers when you navigate to sites that use an EV certificate. For instance our site uses an EV certificate (https://www.fastmail.com) and browsers will show "FastMail Pty Ltd" in the address bar. Being able to display a clear "PayPal, Inc" next to emails legitimately from PayPal or any other financial institution would seem to be a significant win for users (modulo the slightly sad results we already found regarding users falling for phishing emails).

Unfortunately, there's no standard for this now and nothing on the horizon, and it's not entirely obvious how to do this without support from the senders. A naive approach that doesn't require sender changes would be to extract the domain from a From header address and attempt to make an https:// connection to it. But there are all sorts of edge cases. For instance, PayPal uses country-specific domains for DKIM signing (e.g. paypal.com.au), but if you go to http://paypal.com.au in a web browser, it redirects to https://www.paypal.com. You can't just follow any redirect, because a scammer could set up paypal-scam.com and redirect to https://www.paypal.com. Working out which redirects should actually be followed is entirely non-trivial.

Coda

This post has turned out significantly longer than I originally anticipated, but it shows just how complex a subject email authentication is in a modern context. In many cases, FastMail tries hard to make these things "just work", both as a receiver from other systems and, if you're a customer, as a sender. If you host DNS for your domain with us, we set up SPF and DKIM signing records automatically for you. We don't currently set up a DMARC record (there are still too many different ways people send email), but we hope in the future to allow easier construction and rollout of DMARC records for user domains.


PGP tools with FastMail

Published 23 Dec 2016 by Nicola Nye in FastMail Blog.

This is the twenty third and penultimate post in the 2016 FastMail Advent Calendar. Stay tuned for the final post tomorrow.


Earlier in our advent series, we looked at why FastMail doesn't have PGP support, and we mentioned that the best way to use PGP was with a command line or IMAP client.

So, as promised, here is a quick guide to (some of) the open source PGP clients available for use with your FastMail account! We definitely recommend that you use open source encryption software, and preferably reproducible builds.

Not sure how encryption like PGP works? This is a basic overview of encryption which leads into an understanding of PGP to encrypt email. If you plan on taking your privacy seriously, we recommend further reading to understand the risks, rewards and usability issues associated with using encryption software.

While we have done some basic research on these options, we can't provide any guarantees as to their suitability for your particular situation.

Browser plugins

Mailvelope

WebPG

Native clients

MacOS

GPGTools

iOS (iPhone/iPad)

There are no open source applications available for iOS, but these apps are available (and claim to be built on open source implementations of Open PGP) if you are looking for options.

iPGMail

PGPEverywhere

Windows/Linux

Install a mail client compatible with a set of plugins to enable PGP.

Plugins:

Android

OpenKeychain

Command Line

We recommend GNU Privacy Guard on the command line. It is available as source code and binaries for many platforms.

Chris, our intrepid tester, has set up gpg on his work laptop to allow him to securely transfer data to his home machine, without ever bringing a copy of the private key to work. He's given the following set of steps, which include easy-to-use aliases:

Generate a key with: gpg --gen-key (it asks some questions)

Export the public key with: gpg --armor --output ~/example.fm.key --export chris@example.fm. It will look like this:

-----BEGIN PGP PUBLIC KEY BLOCK-----
Version: GnuPG v1

mQENBFhbajUBCADO7Rp0dVuVb2JWv86zvnqUC32NuarYXIeCpesvqIxU8wqr7hh5
R4IwZyVEcBYTyVaMWVhjGmxGCBhvauKb8ZivRwuUw0bHwVKCfjI+uWjB27lVwRLE
9zxNa2NA8svzY8EgImo48KO2/YA4Rw9ozMLxM/KkmRnmnoo5oDk1jXe7I0ILOPb1
6pQVDT/PJRrb+QXc7AMCD/Jj61PgFnPBGqLvICTTKwoeIE8dfFu7l0hwOTSloDv7
KiiM4+Xwz2Lptt7eJAlpKImCzeH96/yPK4IfkAId2IJCC5GfChG2aovNFBrhPsMv
9jWVNDvFvoLTyYqM5V2slaa/U6qTkWiyV3tpABEBAAG0NUNocmlzIEV4YW1wbGUg
KFRoaXMgaXMgYSBkZW1vIGtleSkgPGNocmlzQGV4YW1wbGUuZm0+iQE4BBMBAgAi
BQJYW2o1AhsDBgsJCAcDAgYVCAIJCgsEFgIDAQIeAQIXgAAKCRAZ87lBaIIEbq/d
B/0Z2K9gF+e65B9da2Gim0bpEH8MNmHJlkcHgIxheMyQ7s8ClrqRRnGZRFEBw55F
x7VGphBjav8H2czp3LKE5OrAyoT1z+kCmXZff7iHII2YIK0zzU3DsyUTpM2AOZKL
fQ5d9nZMJY7Jg7BbDM+N1SJsw6+nRuyG0FnZfF57Qk+h1p2rZn/jadno/XeargeZ
I2TI7GhkBg4ujB0k6Cpvr8gq394TohDcCPEYUBDI5m/5FkyHkUFO8SUGQu0fJ9/t
xX1J91z9EW0xbcjIsmg7TtIpbM3UocoSz++svSjYz2mU546gz/76688nSrYKJ+Os
IZ+wmJdOBc3Du28QbffXGRahuQENBFhbajUBCAC+rldeh9LKxoRblhUfaCxttOQ8
PeqSlNC5IvPikTnjWtkThbVCsM3OYITh18Q66WSSK+2AkWlHdSH6HdaA6zNP7wqZ
iAWf7LP1maLg/a/e8zbC3rTL5LXrtkln0IIje6aXtyq4bLGDuLQEJBo7eBqetr27
Cb5pCatDBkOmxpQxzFQmnYfCMyC8Dm6Z2GIrLj6u5Zb0GIrNoBqhPFxD1MRsromU
ERwWYNQjKodEllv/DMt3yAn2CQRlABxPFem16cDqFEGD/UhcJQrvpVrMbHpANWqh
nLgcBPLhXcmJ5Zd0JhtkwapZ/mLqZkTWWmGGQRdE9RlbKoYsT1yNVeX0RntRABEB
AAGJAR8EGAECAAkFAlhbajUCGwwACgkQGfO5QWiCBG5MYwf/ff7WBragmfCXOaTz
LjERK1nScXzlTZ5ZeEUQTcoujbQvSuFBTw0XtiKWNN3imGhmhorjmQyMFjCmIys2
YCur+c3Jmh6BO8q0xRJwS0jxtNjkObSx2+ICBi6gTTkrBb3ya6Uy2k4BhVfQArlv
5UZeMcxZB8Gh8S0pC4S9s4dTBn1+i4aKSJSGITleDtSj4ZfrZ2JI/mMaJSpk1BKg
JtTb9s+AcWpurV5HW5HCb8PKQsLndPJH5cH0xqIjW8Ha6dbsXmlCfpTNaAoOkQDC
rqWyA3a+f4o/kgq/0cOlJHponcxWmbvTXIBwMtR0O91E5pqp4/no9SmSWefLd0yM
zbOJdA==
=K7Ec
-----END PGP PUBLIC KEY BLOCK-----

On another machine, eg your work machine (somewhere you want to send messages/files from), import the key with: gpg --import ~/example.fm.key. Then add the following to your ~/.bashrc file:

alias ToChris='gpg --encrypt --recipient="chris@example.fm" --armour'
alias scramble="gpg --output encrypted.gpg --encrypt --recipient chris@example.fm "

As the work machine does not have your password or private key, you can then create encrypted messages/files from the command line:

echo "This is a private message. Remember to feed Ninja" | ToChris
-----BEGIN PGP MESSAGE-----
Version: GnuPG v1

hQEMA9bIhjnGSgZUAQf+N6nr/t0uGi8HRYAhaxNteWgWR0uwkDPvec6tjHj0gk50
wtBGm1agVAIRWBg5e6w2wkfk2RqQ+ecqPCV4SpgBxdFkcEhsbYOSd81hS2jtQZtH
EUjHK/s0ANqeN8L5a9j6NynwRYjrnFpGWKsSA+Ubd4xUb2vIktXi+BnwNsXdfRw9
A27LZch69w2pr4zHjAyZO/PIq/SEuQ8Xu/+xhR+bq7gHBGOo9sokOle7yTDXnNdR
VsTJaFev4K3didFsNPQWENC6dQ3gHds8qMYGMR4Nt5hIfIfrulyQItjYi/z5LGBq
i6f7y2jSB27wUaGr4EY6vZMyjHpoIlSK0eq4h9bvRNJrAQhcLoEzDxD83oECGXTD
8KIEc78TYlIPgPyGZ3O7GanBxg9tX0UWnjZ8ohk+QStgDiZdivkGOUL1UfByQE7B
qwvgjYrTzu9JJll9LUDjTR0ow4OLaJdIIdPq7uRoBEyhX23mfZIFAruoc3w=
=mjJW
-----END PGP MESSAGE-----

You can then send the GPG message without others being able to read it (e.g. by copying and pasting that text directly into the FastMail web interface as the body of an email).

The command: scramble <filename> will create encrypted.gpg which can be attached to an email.
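
Back on the home machine, which does hold the private key and passphrase, decryption is the standard gpg invocation (the file names here are just examples):

# decrypt an attached file back to plaintext
gpg --output decrypted.txt --decrypt encrypted.gpg

# or save an armoured message to a file and decrypt it from stdin
gpg --decrypt < message.asc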

Key parties

There are plenty of good resources online for how to prepare for a key signing party. Parties are often associated with conferences, allowing you to build a web of trust with other people in your field. Just make sure you know which kind of key party you're attending.

Example GPG bootstrapping

Bron shows the full process of creating a brand new key to replace his expired key, and signing a document with it.

brong@wot:~$ gpg --list-keys brong@fastmail.fm
pub   rsa2048 2015-09-20 [SC] [expired: 2016-09-19]
      0FBAC288980E770A5A789BA1410D67927CA469F8
uid           [ expired] Bron Gondwana <brong@fastmail.fm>

Shows how long since I've last needed to sign something!

brong@wot:~$ gpg --gen-key
gpg (GnuPG) 2.1.15; Copyright (C) 2016 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

gpg: starting migration from earlier GnuPG versions
gpg: porting secret keys from '/home/brong/.gnupg/secring.gpg' to gpg-agent
gpg: key 410D67927CA469F8: secret key imported
gpg: migration succeeded
Note: Use "gpg --full-gen-key" for a full featured key generation dialog.

GnuPG needs to construct a user ID to identify your key.

Real name: Bron Gondwana
Email address: brong@fastmail.fm
You selected this USER-ID:
    "Bron Gondwana <brong@fastmail.fm>"

Change (N)ame, (E)mail, or (O)kay/(Q)uit? O
We need to generate a lot of random bytes. It is a good idea to perform
some other action (type on the keyboard, move the mouse, utilize the
disks) during the prime generation; this gives the random number
generator a better chance to gain enough entropy.

At this point it popped up a dialog asking me to choose a passphrase.

We need to generate a lot of random bytes. It is a good idea to perform
some other action (type on the keyboard, move the mouse, utilize the
disks) during the prime generation; this gives the random number
generator a better chance to gain enough entropy.
gpg: key D92B20BCF922A993 marked as ultimately trusted
gpg: directory '/home/brong/.gnupg/openpgp-revocs.d' created
gpg: revocation certificate stored as '/home/brong/.gnupg/openpgp-revocs.d/8D8DEE2A5F30EF2E617BB2BBD92B20BCF922A993.rev'
public and secret key created and signed.

pub   rsa2048 2016-12-22 [SC]
      8D8DEE2A5F30EF2E617BB2BBD92B20BCF922A993
uid                      Bron Gondwana <brong@fastmail.fm>
sub   rsa2048 2016-12-22 [E]

brong@wot:~$

Now I have a new key. Let's pop that on the keyservers:

brong@wot:~$ gpg --send-keys 8D8DEE2A5F30EF2E617BB2BBD92B20BCF922A993
gpg: sending key D92B20BCF922A993 to hkp://keys.gnupg.net
brong@wot:~$
brong@wot:~$ echo "So you can all encrypt things to me now, and verify my signature (assuming you trust a fingerprint from a blog)" | gpg --clearsign
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

So you can all encrypt things to me now, and verify my signature (assuming you trust a fingerprint from a blog)
-----BEGIN PGP SIGNATURE-----

iQEcBAEBCAAGBQJYW7nPAAoJENkrILz5IqmT554IAL6cg6+ILkrKeLQlzDtA7pZ9
IluYJCt+HpvGw4wXnOmxLyWa/PkWvHUwAAQ9GpgZq7ZB8Sv4HPkm4sRz3zRvcsfR
gpfp5YmYk/i8Oj482jYp1lsngTCEeHkLNWrvXZyoiVUzWbfhYOzrkIDRwgNUCXuF
i/pgYT4K36d6OdfKbI4jsC62sJT20H8qjO9/I5o0gcmb+axv/kSuO87jvGySMXT5
EAYtogDd+jCL1FB0iyu01oUUoTRqgayMUWChJeofVZ9sehqyhXNoYNp4+/+jusmG
nblWeEYZ2S9d5jBNcHgd5cWQDwlBCJKnx1O8Qj9VO+hkBJBB7wHMAIyei8VsIbM=
=QT0N
-----END PGP SIGNATURE-----

And you can tell that I wrote this: none of my colleagues can edit that text and put words in my mouth (unless they create a different key with my email address and falsify the key generation part of this blog post as well!)

The command line is the most secure way to use PGP: your email software and your encryption software run as entirely separate processes, and only ciphertext or signed cleartext is transferred into the emails that are sent out from your secure computer.
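
For a recipient, checking a clearsigned message like the one above is a two-step job. A rough sketch, where signed.txt is a hypothetical file containing the signed block, and the fingerprint is the one printed during key generation above:

gpg --recv-keys 8D8DEE2A5F30EF2E617BB2BBD92B20BCF922A993
gpg --verify signed.txt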


texvc back in Debian

Published 23 Dec 2016 by legoktm in The Lego Mirror.

Today texvc was re-accepted for inclusion into Debian. texvc is a TeX validator and converter that can be used with the Math extension to generate PNGs of math equations. It had been removed from Jessie when MediaWiki itself was removed. However, a texvc package is still useful for those who aren't using the MediaWiki Debian package, since texvc requires OCaml to build from source, which can be pretty difficult.

Barring any other issues, texvc will be included in Debian Stretch. I am also working on having it included in jessie-backports for users still on Jessie.
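
If you want to try the package, installation is the usual apt workflow. A minimal sketch, assuming the package is available in your suite (unstable today, Stretch or jessie-backports later):

sudo apt-get install texvc
which texvc

The second command should report /usr/bin/texvc, and if you use the Math extension's texvc mode you can point its texvc path setting (e.g. $wgTexvc) at that binary in LocalSettings.php.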

And as always, thanks to Moritz for reviewing and sponsoring the package!


MediaWiki not creating a log file and cannot access the database

Published 22 Dec 2016 by sealonging314 in Newest questions tagged mediawiki - Webmasters Stack Exchange.

I'm trying to set up MediaWiki on an Apache2 server. Currently, when I navigate in my web browser to the directory where the wiki is stored, I see the contents of LocalSettings.php dumped on the screen, as well as this error message:

Sorry! This site is experiencing technical difficulties.

Try waiting a few minutes and reloading.

(Cannot access the database)

I have double-checked the database name, username, and password in LocalSettings.php, and I am able to log in using these credentials on the web server. I am using a MySQL database.

I have been trying to set up a debug log so that I can see a more detailed error message. Here's what I've added to my LocalSettings.php:

$wgDebugLogFile = "/var/log/mediawiki/debug-{$wgDBname}.log";

The directory /var/log/mediawiki has 777 permissions, but no log file is even created. I've tried restarting the Apache server, which doesn't help.

Why is MediaWiki not creating a debug log? Are there other logs that I should be looking at for more detailed error messages? What could the reason be for the error message that I'm getting?
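
For reference, MediaWiki can also surface the underlying error directly in the browser while debugging, with settings roughly like the following in LocalSettings.php (best turned off again on a production wiki):

$wgShowExceptionDetails = true;
$wgShowDBErrorBacktrace = true;
$wgShowSQLErrors = true;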


Want to Close Your Plugins? Email!

Published 22 Dec 2016 by Ipstenu (Mika Epstein) in Make WordPress Plugins.

Hi everyone, it’s winter at last, and there’s snow in the mountains! This is the perfect time to sit by the fire and look at your plugins and get rid of the ones you don’t want to be on the hook for any more.

Did you make a plugin for an event that happened a long time ago, like the 2008 Olympics? Did you make a featured plugin that got wrapped into core and you’re done?

Email plugins@wordpress.org with a link to the plugin and we’ll close it for you!

Doing this means you won’t get any new people complaining about how the plugin doesn’t work and disables itself in WP 4.3 and up (even though you documented it…). It’s less work for you and it’s okay to EoL plugins. We’ll close ’em for you and you’ll be done.

A lovely winter present for everyone.

(If you think the plugin has a use and life, but you don’t want to support it anymore, consider adding the tag ‘adopt me’ to your readme. Just update your readme file with that and maybe someone will come and offer a new home for your old plugin. Check out https://wordpress.org/plugins/tags/adopt-me to see the plugins out there looking for you!)
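
For reference, a sketch of where that tag goes in readme.txt (the plugin name, contributor and version values here are placeholders):

=== My Old Plugin ===
Contributors: yourusername
Tags: adopt-me
Requires at least: 4.0
Tested up to: 4.7
Stable tag: 1.2.3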

#reminder


Cyrus development and release plans

Published 22 Dec 2016 by Bron Gondwana in FastMail Blog.

This is the twenty-second post in the 2016 FastMail Advent Calendar. Stay tuned for another post tomorrow.


Cyrus IMAPd development

As we mentioned earlier in this series FastMail is a major contributor to the Cyrus IMAPd project. As the current project lead, it falls to me to write about where we're at, and where we're going.

Since last year's post about the Cyrus Foundation, Cyrus development has both slowed and sped up, depending on what you're looking at. We haven't advanced the Object Storage work because nobody was sponsoring it any more. Ken from CMU makes it to our weekly meeting, but his availability to work on the open source code depends on how busy he is with other responsibilities.

So for now at least, Cyrus is mostly a FastMail show, and obviously anything that FastMail needs for our own production system takes priority for our staff, and that's where our development resources go.

Still, there's been a ton of work. Looking at commits, well over 10% of the total changes ever happened this year:

brong@bat:~/src/cyrus-imapd$ git log --oneline --since 2016-01-01 | wc -l
1683
brong@bat:~/src/cyrus-imapd$ git log --oneline | wc -l
14327

Looking at the code changes, there's a ton of churn too:

brong@bat:~/src/cyrus-imapd$ git diff 4cc2d21 | diffstat | tail -1
 876 files changed, 107374 insertions(+), 97808 deletions(-)

That includes some really exciting things, like redesigning the entire mbname_t structure to allow converting names between internal and external forms really reliably, and manipulating mailboxes without any dot-character or hierarchy-separator issues, which removes the cause of a ton of past bugs with different configurations.

In terms of new features, there is a full backup/restore system built on top of the replication protocol. There's a fairly complete JMAP implementation. There's much better fuzzy search support, built on the Xapian engine.

A large focus of our development this year has been making things simpler and more robust with APIs that hide complexity and manage memory more neatly, and this will continue with a lot more work on the message_t type next year. So there's been plenty of improvement, not all of it visible in the headline feature department.

And it's not just code. We've moved all our issue tracking to GitHub, and Nicola unified our documentation into the source code repositories, making it easier to contribute pull requests for docs.

Test all the things

As Chris mentioned in his post about our test infrastructure, we've been increasing our test coverage and making sure that tests pass reliably. I'm particularly proud of the integration of ImapTest into our Cassandane test suite, and the fact that we now pass 100% of the tests (once I fixed a couple of bugs in ImapTest! The RFCs are unclear enough that Timo got it wrong, and he's really reliable.) I also added support for CalDAVTester into Cassandane at CalConnect in Hong Kong this year.

Robert has added a ton of tests for all his JMAP, charset and Xapian work.

Our test coverage is still pretty poor by modern development standards, but for a >20 year old project, it's not too shabby, and I'm really glad for Greg's work back when he was at FastMail, and for the ongoing efforts of all the team to keep our tests up to date. It makes our software much better.

In particular, it makes me a lot more comfortable releasing new Cyrus updates to FastMail's users, because for any bug report, the first thing I do now is add a test to Cassandane, so our coverage improves over time.

Going down the FastMail path

To build the 2.5 release, I sat in a pub in Pittsburgh with a giant printout of the 1000+ commits on the FastMail branch and selected which commits should go upstream and which were not really ready yet. The result was a piece of software which was not exactly what anyone had been running, and it kind of shows in some of the issues that have come out with 2.5. The DAV support was still experimental, and most of the new code had never been used outside FastMail.

After releasing 2.5, we looked at what was left of the FastMail-specific stuff, and decided that the best bet was to just import it all into the upstream release, then revert the few things that were really single-provider specific and re-apply them as the fastmail branch. To this day, FastMail production runs only between 10 and 50 small changes away from master on a day-to-day basis, meaning that everything we offer as the open source version has had real-world usage.

So this means that things like conversations, Xapian FUZZY search (requires a custom patched version of Xapian for now, though we're working on upstreaming our patches), JMAP (experimental support) and the backup protocol are all in the 3.0 betas. Plenty of that is non-standard IMAP, though we match standard IMAP where possible.

Version 3.0

There is both less and more than we expected in what will become version 3.0. The main reason for a new major version is that some defaults have changed: altnamespace and unixhierarchysep are now on by default, to match the behaviour of most other IMAP servers in the world. We've also got a brand new Unicode subsystem based on ICU, a close-to-finished JMAP implementation, Xapian fuzzy search, the new backup system and, of course, a rock-solid CalDAV/CardDAV server thanks to Ken's excellent work.
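
For sites upgrading from 2.5 that depend on the old behaviour, both options can be pinned in imapd.conf. A minimal sketch (the values shown are the old pre-3.0 defaults; check the imapd.conf man page for your version):

altnamespace: 0
unixhierarchysep: 0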

Ellie released Cyrus IMAPd 3.0beta6 yesterday, and our plan is to do another beta at the start of January, then a release candidate on January 13th and a full release in February, absent showstoppers.

Plans for next year

Once 3.0 is out, we'll be continuing to develop JMAP, supporting 2.5 and 3.0, and doing more tidying up.

As I mentioned in the Twoskip post, there are too many different database formats internally, and the locking and rollback on error across multiple databases is a mess. I plan to change everything to just one database per user plus one per host, plus spool and cache files. The database engine will have to be totally robust. I'm working on a new design called zeroskip which is going to be amazing, as soon as I have time for it.

I also plan to add back-pointers to twoskip (it requires changes to two record types and a version bump) which will allow MVCC-style lockless reads, even after a crash, and mean less contention for locks in everything. It's all very exciting.

We're heavily involved in standards, with JMAP in the process of being chartered with the IETF for standards track, and our work through CalConnect on a JSON API for calendar events. Cyrus will continue to implement standards and test suites.

The core team is Ken at CMU, Robert S consulting along with the FastMail staff: myself, Ellie, Nicola and Chris. Of these people, Ellie and Robert are focused entirely on Cyrus, and the rest of us share our duties. It's been fantastic having those two who can single-mindedly focus on the project.

There's plenty of space for more contributors in our team! Join us on Freenode #cyrus IRC or on the mailing lists and help us decide the direction of Cyrus for the future. The roadmap is largely driven by what FastMail wants, because we're paying for the bulk of the work that's being done, but we're also willing to invest our time in the community, supporting other users and building a well-rounded product; we just have to know what you need!


What we talk about when we talk about push

Published 21 Dec 2016 by Rob N ★ in FastMail Blog.

This is the twenty-first post in the 2016 FastMail Advent Calendar. Stay tuned for another post tomorrow.


Something people ask us fairly often when considering signing up with us is "do you support push?". The simple answer is "yes", but there's some confusion around what people mean when they ask that question, which makes the answer a bit complicated.

When talking about email, most people usually understand "push" to mean that they get near-realtime notifications on a mobile device when new mail arrives. While this seems like a fairly simple concept, making it work depends on some careful coordination between the mail service (eg FastMail), the mail client/app (iOS Mail, the FastMail apps or desktop clients like Thunderbird) and, depending on the mechanism used, the device operating system (eg iOS or Android) and services provided by the OS provider (Apple, Google). Without all these pieces coordinating, realtime notification of new messages doesn't work.

All this means there isn't an easy answer to the question "do you support push?" that works for all cases. We usually say "yes" because for the majority of our customers that is the correct answer.

There are various mechanisms that a mail client can use to inform the user that new mail has arrived, each with pros and cons.

IMAP

Pure IMAP clients (desktop and mobile) have traditionally had a few mechanisms available to them to do instant notifications.

Polling

By far the simplest way for a client to see if there's new mail is to just ask the server repeatedly. If it checks for new mail every minute it can come pretty close to the appearance of real-time notification.

The main downsides to this approach are network data usage and (except for "always-on" devices like desktop computers) battery life.

Network usage can be a problem if it takes a lot of work to ask the server for changes. In the worst case, you have to ask for the entire state of the mailbox on the server and compare it to a record on the device of what was there the last time it checked. Modern IMAP has mechanisms (such as CONDSTORE and QRESYNC) that allow a client to get a token from the server that encodes the current server mailbox state at that time. Next time the client checks, it can present that token to say "give me all the changes that happened since I was last here". If both the client and server support this, it makes the network usage almost nothing in the common case where there's no change.
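
As a rough sketch of what that looks like on the wire (mailbox name and modseq values invented), a CONDSTORE-aware client remembers the HIGHESTMODSEQ it saw last time and asks only for newer changes:

C: a1 SELECT INBOX (CONDSTORE)
S: * OK [HIGHESTMODSEQ 715194045] Highest
S: a1 OK [READ-WRITE] Completed
C: a2 FETCH 1:* (FLAGS) (CHANGEDSINCE 715194045)
S: a2 OK Completed

If nothing has changed, that FETCH returns no untagged responses at all, which is what makes the check so cheap.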

Battery life can become a problem in that the system has to wake up regularly and hit the network to see if anything happened. This is wasteful because on most of these checks you won't have received any mail, so the device ends up waking up and going back to sleep for no real reason.

IDLE

To avoid the need to poll constantly, IMAP has a mechanism called IDLE. A client can open a folder on the server, and then "idle" on it. This holds the connection to the server open but lets the client device go to sleep. When something happens on the server, it sends a message to the client on that connection, which wakes the device so that it can then ask what changed.
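
A rough sketch of an IDLE session (folder name and message count invented):

C: a1 SELECT INBOX
S: a1 OK [READ-WRITE] Completed
C: a2 IDLE
S: + idling
(possibly hours later, a new message arrives)
S: * 24 EXISTS
C: DONE
S: a2 OK Completed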

For arbitrary IMAP clients that do not have specific support for device push channels or other mechanisms, this is usually what's happening. IDLE works, but has a couple of issues that make it less than ideal.

The main one is that IDLE only allows the client to find out about changes to a single folder. If the client wants to be notified about changes on multiple folders, it must make multiple IMAP connections, one for each folder. This makes clients more complex and may run into problems if there are many connections, as some servers limit the number of simultaneous connections for a user.

The other issue, particularly on mobile devices, is that IDLE operates over TCP. This can cause problems when devices change networks (which may include moving between mobile cells), which may break the connection. Because of the way TCP operates, it's not always possible for a client to detect that the connection is no longer working, which means the client has to resort to regular "pings" (typically requiring a regular wakeup) or rely on the device to tell it when the network has changed.

IDLE is good for many cases, and implemented by almost every IMAP client out there, but it's definitely the most basic option.

NOTIFY

To deal with the one-folder-per-connection problem, IMAP introduced another mechanism called NOTIFY. This allows a client to request a complex set of changes it's interested in (including a list of folders and a list of change types, like "new message" or "message deleted") and be informed of them all in one go.

This is a step in the right direction, but still has the same problem in that it operates over TCP. It's also a rather complicated protocol and hard to implement correctly, which I expect is why almost no clients or servers support it. Cyrus (the server that powers FastMail) does not implement it and probably never will.

Device push services

Most (perhaps all) the mobile device and OS vendors provide a push service for their OS. This isn't limited to iOS and Android - Windows, Blackberry and even Ubuntu and the now-defunct Firefox OS all have push services.

Conceptually these all work the same way. An app asks the device OS for some sort of push token, which is a combination device and app identifier. The app sends this token to some internet service that wants to push things to it (e.g. FastMail). When the service has something to say, it sends a message to the push service along with the token. The push service holds that message in a queue until the device contacts it and requests the messages, then passes them along. The device OS then uses the app identifier in the token to wake up the appropriate app and pass the message to it. The app can then take the appropriate action.

Deep down, the device OS will usually implement this by asking the push service to give it any new messages. There's usually some sort of polling involved but it can also be triggered by signalling from the network layer, such as a network change. It's not substantially different to an app polling regularly, but the OS can be much more efficient because it has a complete picture of the apps that are running and their requirements as well as access to network and other hardware that might assist with this task.

FastMail's Android app

Notifications in our Android app work exactly along these lines. At startup, the app registers a push token with FastMail. When something changes in the user's mailbox, we send a message with the push token to Google's Cloud Messaging push service (or, for non-Google devices, Pushy or Amazon's Device Messaging services) to signal the change. This eventually (typically within a couple of seconds) causes the app to be woken up by the OS, and it calls back to the server to get the latest changes and display them in the notification shade.

The one downside of this mechanism is that it's possible for the message from the push service to be missed. Google's push service is explicitly designed not to guarantee delivery, and will quite aggressively drop messages that can't be delivered to the device in a timely fashion (usually a couple of minutes). This can happen when the device is off-network at the time or even just turned off. For this reason, the app also asks the OS to wake it on network-change and power events, which also cause it to ask our servers for any mailbox changes. In this way, it appears that nothing gets missed.

FastMail's iOS app

The FastMail iOS app works a little differently. One interesting feature of the iOS push system is that it's possible to include a message, icon, sound, "badge" (the count of unread messages on the app icon) and action in the push message, which the OS will then display in the notification shade. In this way the app never gets woken at all. The OS itself displays the notification and invokes the action when the notification is tapped. In the case of our app, the action is to start the app proper and then open the message in question (we encode a message ID into the action we send).

This is somewhat inflexible, as we can only send the kinds of data that Apple define in their push format, and there's arguably a privacy concern in that we're sending fragments of mail contents through a third-party service (though you already have to trust Apple if you're using their hardware so it's perhaps not a concern). The main advantage is that you get to keep your battery because the app never gets woken and never hits the network to ask for changes. It's hard to get more efficient than doing nothing at all!
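
As a rough illustration only (the field values are invented, not our exact payload), an APNs notification is a small JSON document: the "aps" dictionary holds the pieces the OS displays, and anything else, such as a message ID, rides alongside for the action:

{
  "aps": {
    "alert": "New message from Alice",
    "badge": 3,
    "sound": "default"
  },
  "msgId": "M12345"
}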

Since iOS 8 it's been possible to have a push message wake an app for processing, just like Android and every other platform. A future release of our iOS app will take advantage of this to bring it into line.

iOS Mail

The Mail app that ships on iOS devices is probably one of the better IMAP clients out there. Apple however chose not to implement IDLE, probably because of the battery life problems. Instead they do regular polling, but the minimum poll interval is 15 minutes. This works well and keeps battery usage in check, but is not quite the timely response that most people are after. When used in conjunction with their iCloud service, however, iOS Mail can do instant notifications, and it's this that most people think of as push.

It works pretty much exactly like FastMail's Android app. Upon seeing that the IMAP server offers support for Apple's push mechanism, the app sends the server a list of folders that it's interested in knowing about changes for, and a push token. Just as described above, when something changes the IMAP server sends a message through Apple's push service, which causes the Mail app to wake and make IMAP requests to get the changes.

The nice thing about this for an IMAP client is that it doesn't need to hold the TCP connection open. Even if it drops, as it might if there's been no new mail for hours, it can just reconnect and ask for the changes.

Of course, this mechanism is limited to the iOS Mail app with servers that support this extension. Last year, Apple were kind enough to give us everything we need to implement this feature for FastMail, and it's fast become one of our most popular features.

Exchange ActiveSync

One of the first systems to support "push mail" as its commonly understood was Microsoft's Exchange ActiveSync, so it rates a mention. Originally used on Windows Mobile as early as 2004 to synchronise with Exchange servers, it's still seen often enough, particularly on Android devices (which support it out-of-the-box). There's a lot that we could say about ActiveSync, but as a push technology there's nothing particularly unusual about it.

The main difference between it and everything else is that it doesn't have a vendor-provided push service. Ultimately, the ActiveSync "service" on the device has to regularly poll any configured Exchange servers to find out about new mail and signal this to any interested applications. While not as efficient as having the OS do it directly, it can come pretty close particularly on Windows and Android which allows long-lived background services.

Calendars and contacts

In October we added support for push for calendars and contacts on iOS and macOS as well. In terms of push, they work on exactly the same concept as IMAP: the app requests notifications for a list of calendars and addressbooks and presents a push token. The server uses that token and informs the push service, which passes the message through. The OS wakes the app and it goes back to the server and asks for updates. There are some structural differences in the way this is implemented for CalDAV/CardDAV vs IMAP, but mostly it uses the same code and data model as the rest.

The future

Sadly, the state of "push" for mail is rather fragmented at the moment. Anything can implement IMAP IDLE and get something approximating push, but it's difficult (but not impossible) to make a really nice experience. To do push in a really good (and battery-friendly) way, you're tied to vendor-specific push channels.

We're currently experimenting with a few things that may or may not help to change this:

Time will tell if these experiments will go anywhere. These are the kind of things that require lots of different clients and servers to play with and see what works and what doesn't. That's not something we can do by ourselves, but if you're a mail client author and you'd like to be able to do better push than what IMAP IDLE can give you, you should talk to us!


DNSSEC & DANE: no traction yet

Published 20 Dec 2016 by Rob Mueller in FastMail Blog.

This is the twentieth post in the 2016 FastMail Advent Calendar. Stay tuned for another post tomorrow.


Back in our 2014 advent series we talked about our DNS hosting infrastructure and our desire to support DNSSEC and DANE at some point in the future. It's been two years since then, and we still don't support either of them. What gives?

At this point we don't have any particular timeline for supporting DNSSEC or DANE. To be clear, these two features are fairly interconnected for us; the main reason for supporting DNSSEC would be to support DANE. DANE provides a way for a domain to specify that it requires an encrypted connection, and the SSL/TLS certificate that should be presented, rather than just accepting an opportunistically-encrypted one. This avoids a MITM downgrade and/or interception attack. Currently no email servers (that we're aware of) verify that the domain of a certificate matches the server name they connected to, or that the certificate is issued by a known CA (Certificate Authority). This means that server-to-server email can be opportunistically encrypted, and so can't be read by a passive eavesdropper, but it isn't protected against an active MITM attack.
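
For illustration, a DANE policy is published as a TLSA record alongside the (DNSSEC-signed) MX host. A sketch with an invented hostname and a placeholder digest, where the "3 1 1" fields mean match this exact server key (DANE-EE, SPKI, SHA-256):

_25._tcp.mx.example.com. IN TLSA 3 1 1 <hex-encoded SHA-256 digest of the server's public key>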

Unfortunately uptake of DANE has been very slow, and it appears that most major email providers (e.g. Gmail, Outlook365, Yahoo, and many more) have no interest in supporting it at all. This severely reduces the incentive to implement as it would not improve protection for the majority of email.

Instead, providers appear to be converging on an SMTP MTA Strict Transport Security protocol, analogous to the HTTP Strict Transport Security feature that tells browsers to always use https:// when connecting to a website. It's likely this will get much greater traction. We're monitoring progress and intend to implement the standard when it is complete.

Along with a lack of sites supporting DANE, there are also a whole lot of scary implications in running a DNSSEC service. DNSSEC is fragile and easy to get wrong in subtle ways. A single small mistake can completely break DNS for your domain. And worse, in our case, it can break the DNS for the tens of thousands of domains we host for our customers.

Even some of the biggest players make mistakes. APNIC, the RIR that allocates IP addresses for the entire Asia-Pacific region (so an important and core part of the internet), managed to mess up their DNSSEC for .arpa, meaning reverse DNS lookups for a large number of IP addresses failed for some time!

Not to mention DNSSEC outages at places like nist.gov (the National Institute of Standards and Technology); even opendnssec.org (which makes DNSSEC software and attempts to "drive adoption of Domain Name System Security Extensions (DNSSEC) to further enhance Internet security") has had multiple failures.

If the people that help run the internet or write the software and encourage the use of DNSSEC can't get it right, it's scary to think what non-experts could mess up. The litany of DNSSEC outages is only likely to increase, given the tiny amount of real world uptake it's had.

We're all for security and privacy, but part of that is ensuring availability to your email as well. We want to provide real useful benefits to users with low chance of things going wrong. At the moment, the risk trade-off profile for DNSSEC/DANE doesn't seem right to us.


Arriving in Jordan

Published 18 Dec 2016 by Tom Wilson in tom m wilson.

I’ve arrived in the Middle East, in Jordan.  It is winter here.  Yesterday afternoon I visited the Amman Citadel, a raised acropolis in the centre of the capital. It lies atop a prominent hill in the centre of the city, and as you walk around the ruins of Roman civilisation you look down on box-like limestone-coloured apartment […]

Clone an abandoned MediaWiki site

Published 17 Dec 2016 by Bob Smith in Newest questions tagged mediawiki - Webmasters Stack Exchange.

Is there any way to clone a MediaWiki site that's been abandoned by the owner and all admins? None of the admins have been seen in 6 months, all attempts to contact any of them over the past 3-4 months have failed, and the community is worried about the future of the wiki. We have all put countless man-hours into the wiki, and to lose it now would be beyond devastating.

What would be the simplest way to go about this?

Thanks.
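
(One common approach when only ordinary user access is available is to pull an XML dump and the images through the public API, for example with WikiTeam's dumpgenerator.py. The URL below is a placeholder and the exact flags depend on the tool version.)

python dumpgenerator.py --api=http://example.org/w/api.php --xml --images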


Sri Lanka: The Green Island

Published 12 Dec 2016 by Tom Wilson in tom m wilson.

I just arrived in Tangalle.  What a journey… local bus from Galle Fort. Fast paced Hindi music, big buddha in the ceiling with flashing lights, another buddha on the dash board of the bus wrapped in plastic, a driver who swung the old 1970s Leyland bus around corners to the point where any more swing […]

Spices and Power in the Indian Ocean

Published 12 Dec 2016 by Tom Wilson in tom m wilson.

I’m in Galle, on the south-east coast of Sri Lanka. From the rooftop terrace above the hotel room I’m sitting in the sound of surf gently crumbling on the reef beyond the Fort’s ramparts can be heard, and the breathing Indian ocean is glimpsed through tall coconut trees. The old city juts out into the […]

wikidiff2 1.4.1

Published 7 Dec 2016 by legoktm in The Lego Mirror.

In MediaWiki 1.28, MaxSem improved diff limits in the pure PHP diff implementation that ships with MediaWiki core. However Wikimedia and other larger wikis use a PHP extension called wikidiff2, for better performance and additional support for Japanese, Chinese, and Thai.

wikidiff2 1.4.1 is now available in Debian unstable and will ship in stretch, and should soon be available in jessie-backports and my PPA for Ubuntu Trusty and Xenial users. This is the first major update of the package in two years. Installation in MediaWiki 1.27+ is now even more straightforward: as long as the module is installed, it will automatically be used, with no global configuration required.

Additionally, releases of wikidiff2 will now be hosted and signed on releases.wikimedia.org.


Tropical Architecture – Visiting Geoffrey Bawa’s Place

Published 6 Dec 2016 by Tom Wilson in tom m wilson.

I’ve arrived in Sri Lanka. Let me be honest: first impressions of Colombo bring forth descriptors like pushy, moustache-wearing, women-dominating, smog-covered, coarse, opportunistic and disheveled. It is not a city that anybody should rush to visit.  However this morning I found my way through this city to a tiny pocket of beauty and calm – the […]

Housing the Fairbairn Collection

Published 6 Dec 2016 by slwacns in State Library of Western Australia Blog.

The Fairbairn collection includes over 100 artefacts of various types: clothing, a sword, hair ornaments made out of human hair, and items used for sewing, just to name a few. All of these objects need to be stored in the best possible way.


Housing is the process of making protective enclosures for objects to be stored in. By housing an object or group of objects we are creating a micro-environment: temperature and humidity become more stable, direct light is deflected, and materials are not damaged when handled or placed on a shelf. Housing can be a box, folder or tray that has been custom made and fitted out to the exact requirements of the object. Inert materials and/or acid-free board are used.

Some of the objects in the Fairbairn collection required conservation treatment before they were housed. For example, the leather had detached from the front of this object but was reattached during treatment.

Some objects required individual housing (for example clothing items, the sword and shoes) but the majority of the objects could be housed in groups. These groups were determined by object type and the material they were made of (for example, all the coin purses made from similar materials are in a group).

(Image: coin purses)

This was done not only for ease of locating a particular object but because different material types can need different storage conditions and some materials can affect other materials if stored together (for example the vapours released from wood can cause metals to corrode).

(Image: laying out objects)

Each object was arranged to fit into a box in such a way that its weight would be evenly supported and so that it could be retrieved without being damaged or damaging neighbouring objects. Then layers of board and/or foam were built up to support the items.

(Image: an open box showing contents, including glasses and a stamp)

Labels were placed to give direction on safely removing the objects from their housing. Labels were also placed on the outside of the boxes to identify what each box holds, as well as the correct way to place each object inside the box.

(Image: labels on housing)

Custom supports were made for some objects. For example, the internal support for this hat.

 

Each item in the Fairbairn collection has now been housed and placed carefully into long term storage with the rest of the State Library of Western Australia’s collection.



Walking to the Mountain Monastery

Published 4 Dec 2016 by Tom Wilson in tom m wilson.

That little dot in the north west of south-east Asia is Chiang Mai.  As you can see there is a lot of darkness around it.  Darkness equals lots of forest and mountains. I’ve recently returned from the mountains to Chiang Mai.  Its very much a busy and bustling city, but even here people try to bring […]

Where is "MediaWiki:Vector.css" of my MediaWiki

Published 4 Dec 2016 by hasanghaforian in Newest questions tagged mediawiki - Webmasters Stack Exchange.

I want to install Skin:Vector-DarkCSS on my MediaWiki. It must be simple, but the second step of the installation instructions says I have to edit MediaWiki:Vector.css on my wiki. I searched for a file named MediaWiki:Vector.css, but could not find one in the MediaWiki home directory. Where is that file? Do I need to create it?


Forget travel guides.

Published 29 Nov 2016 by Tom Wilson in tom m wilson.

Lonely Planet talks up every country in the world, and if you read their guides every city and area seems to have a virtue worth singing. But the fact is that we can’t be everywhere and are forced to choose where to be as individuals on the face of this earth. And some places are just […]

MediaWiki VisualEditor Template autocomplete

Published 29 Nov 2016 by Patrick in Newest questions tagged mediawiki - Webmasters Stack Exchange.

Running MediaWiki 1.28, but I had this problem with 1.27, and was hoping it would be resolved.

I am using VisualEditor, and would like my users to be able to get an autocomplete when inserting a template.

I have TemplateData installed, and can confirm api.php is returning matches:

62:{title: "Template:DefaultHeader", params: {},…}
117:{title: "Template:DefaultFooter", params: {},…}

But I don't get a drop-down, and there are no errors in the debug console.


Back That Thing Up

Published 29 Nov 2016 by Jason Scott in ASCII by Jason Scott.


I’m going to mention two backup projects. Both have been under way for some time, but the world randomly decided the end of November 2016 was the big day, so here I am.

The first is that the Internet Archive is adding another complete mirror of the Wayback machine to one of our satellite offices in Canada. Due to the laws of Canada, to be able to do “stuff” in the country, you need to set up a separate company from your US concern. If you look up a lot of major chains and places, you’ll find they all have Canadian corporations. Well, so does the Internet Archive and that separate company is in the process of getting a full backup of the Wayback machine and other related data. It’s 15 petabytes of material, or more. It will cost millions of dollars to set up, and that money is already going out the door.

So, if you want, you can go to the donation page and throw some money in that direction and it will make the effort go better. That won’t take very long at all and you can feel perfectly good about yourself. You need read no further, unless you have an awful lot of disk space, at which point I suggest further reading.


Whenever anything comes up about the Internet Archive’s storage solutions, there’s usually a fluttery cloud of second-guessing and “big sky” suggestions about how everything is being done wrong and why not just engage a HBF0_X2000-PL and fark a whoziz and then it’d be solved. That’s very nice, but there’s about two dozen factors in running an Internet Archive that explain why RAID-1 and Petabyte Towers combined with self-hosting and non-cloud storage has worked for the organization. There are definitely pros and cons to the whole thing, but the uptime has been very good for the costs, and the no-ads-no-subscription-no-login model has been working very well for years. I get it – you want to help. You want to drop the scales from our eyes and you want to let us know about the One Simple Trick that will save us all.

That said, when this sort of insight comes out, it’s usually back-of-napkin and done by someone who will be volunteering several dozen solutions online that day, and that’s a lot different than coming in for a long chat to discuss all the needs. I think someone volunteering a full coherent consult on solutions would be nice, but right now things are working pretty well.

There are backups of the Internet Archive in other countries already; we're not that bone stupid. But this would be a full, constantly maintained backup in Canada, one that would be interfaced with other worldwide stores. It's a preparation for an eventuality that hopefully won't come to pass.

There’s a climate of concern and fear that is pervading the landscape this year, and the evolved rat-creatures that read these words in a thousand years will be able to piece together what that was. But regardless of your take on the level of concern, I hope everyone agrees that preparation for all eventualities is a smart strategy as long as it doesn’t dilute your primary functions. Donations and contributions of a monetary sort will make sure there’s no dilution.

So there’s that.

Now let’s talk about the backup of this backup a great set of people have been working on.


About a year ago, I helped launch INTERNETARCHIVE.BAK. The goal was to create a fully independent distributed copy of the Internet Archive that was not reliant on a single piece of Internet Archive hardware and which would be stored on the drives of volunteers, with 3 geographically distributed copies of the data worldwide.

Here’s the current status page of the project. We’re backing up 82 terabytes of information as of this writing. It was 50 terabytes last week. My hope is that it will be 1,000 terabytes sooner rather than later. Remember, this is 3 copies, so to do each terabyte needs three terabytes.

For some people, a terabyte is this gigantically untenable number and certainly not an amount of disk space they just have lying around. Other folks have, at their disposal, dozens of terabytes. So there’s lots of hard drive space out there, just not evenly distributed.

The IA.BAK project is a complicated one, but the general situation is that it uses the program git-annex to maintain widely distributed backups from volunteers, with a monthly "check-in" of data integrity. It has a lot of technical meat to mess around with, and we've had some absolutely stunning work done by a team of volunteering developers and maintainers as we make this plan work on the ground.

And now, some thoughts on the Darkest Timeline.


I’m both an incredibly pessimistic and optimistic person. Some people might use the term “pragmatic” or something less charitable.

Regardless, I long ago gave up assumptions that everything was going to work out OK. It has not worked out OK in a lot of things, and there’s a lot of broken and lost things in the world. There’s the pessimism. The optimism is that I’ve not quite given up hope that something can’t be done about it.

I’ve now dedicated 10% of my life to the Internet Archive, and I’ve dedicated pretty much all of my life to the sorts of ideals that would make me work for the Archive. Among those ideals are free expression, gathering of history, saving of the past, and making it all available to as wide an audience, without limit, as possible. These aren’t just words to me.

Regardless of if one perceives the coming future as one rife with specific threats, I’ve discovered that life is consistently filled with threats, and only vigilance and dedication can break past the fog of possibilities. To that end, the Canadian Backup of the Internet Archive and the IA.BAK projects are clear bright lines of effort to protect against all futures dark and bright. The heritage, information and knowledge within the Internet Archive’s walls are worth protecting at all cost. That’s what drives me and why these two efforts are more than just experiments or configurations of hardware and location.

So, hard drives or cash, your choice. Or both!


Countryman – Retreating to North-West Thailand

Published 29 Nov 2016 by Tom Wilson in tom m wilson.

Made it to Cave Lodge in the small village of Tham Lot.  The last time I was here was seven years ago. I’m sitting on a hammock above the softly flowing river and the green valley. A deeply relaxing place. I arrived here a few days ago. We came on our motorbike taxis from the main […]

De Anza students football fandoms endure regardless of team success

Published 28 Nov 2016 by legoktm in The Lego Mirror.

Fans of the San Francisco 49ers and Oakland Raiders at De Anza College are loyal to their teams even when they are not doing well, but do prefer to win.

The Raiders lead the AFC West with a 9-2 record, while the 49ers are last in the NFC West with a 1-10 record. This is a stark reversal from 2013, when the 49ers were competing in the Super Bowl and the Raiders finished the season with a 4-12 record, as reported by The Mercury News.

49ers fans are not bothered though.

“My entire family is 49ers fans, and there is no change in our fandom due to the downturn,” said Joseph Schmidt.

Schmidt recently bought a new 49ers hat that he wears around campus.

Victor Bejarano concurred and said, “I try to watch them every week, even when they’re losing.”

A fan since 2011, he too wears a 49ers hat around campus to show his support for the team.

Sathya Reach said he has stopped watching the 49ers play not because of their downfall, but because of an increased focus on school.

“I used to watch (the 49ers) with my cousins, not so much anymore,” Reach said.

(Photo: Kaepernick in 2012. Mike Morbeck / CC BY-SA)

Regardless of their support, 49ers fans have opinions on how the team is doing, mostly about 49ers quarterback Colin Kaepernick. Kaepernick protests police brutality against minorities before each game by kneeling during the national anthem. His protest placed him on the cover of TIME magazine, and he was ranked as the most disliked player in the NFL in a September poll conducted by E-Poll Marketing Research.

Bejarano does not follow Kaepernick’s actions off the field, but said that on the field, Kaepernick was not getting the job done.

“He does what he does, and has his own reasons,” Reach said.

Self-described Raider “fanatic” Mike Nijmeh agreed, calling Kaepernick a bad quarterback.

James Stewart, a Raiders’ fan since 5 years old, disagreed and said, “I like Kaepernick, and wouldn’t mind if he was a Raiders’ backup quarterback.”

Reader Poll: Could Derek Carr be the MVP this year?
Yes
Maybe in 5 years
Tom Brady

Both Nijmeh and Stewart praised the Raiders' quarterback, Derek Carr, and Nijmeh, dressed in his Raiders hat, jacket and jersey, said, “Carr could easily be the MVP this year.”

Stewart said that while he also thought Carr is MVP caliber, Tom Brady, the quarterback of the New England Patriots, is realistically more likely to win.

“Maybe in five years,” said Stewart, explaining that he expected Brady to have retired by then.

He is not the only one, as Raider teammate Khalil Mack considers Carr to be a potential MVP, reported USA Today. USA Today Sports’ MVP tracker has Carr in third.

Some 49ers fans are indifferent about the Raiders, others support them simply because they are a Bay Area team, and others just do not like them.

Bejarano said that he supports the Raiders because they are a Bay Area team, but that it bothers him that they are doing so well in contrast to the 49ers.

Nijmeh summed up his feelings by saying the Raiders’ success has made him much happier on Sundays.



1.4.3

Published 26 Nov 2016 by mblaney in Tags from simplepie.

Merge pull request #495 from mblaney/master

New release 1.4.3


Karen Village Life

Published 26 Nov 2016 by Tom Wilson in tom m wilson.

The north-west corner of Thailand is the most sparsely populated corner of the country.  Mountains, forests and rivers, as far as the eye can see.  And sometimes a village. This village is called Menora.  Its a Karen village, without electricity or running water.  Its very, very remote and not mapped on Google Maps. Living out […]

Thai Forest Buddhism

Published 22 Nov 2016 by Tom Wilson in tom m wilson.

The forests of Thailand have long been a place of retreat, particularly since the 1980s.  Forest monks, who go to the forests to meditate, have seen their home get smaller and smaller. In some cases this has prompted them to become defenders of the forest, for example performing tree ordination ceremonies, effectively ordaining a tree in saffron robes […]

Open Source at DigitalOcean: Introducing go-qemu and go-libvirt

Published 21 Nov 2016 by DigitalOcean in DigitalOcean Blog.

At DigitalOcean, we use libvirt with QEMU to create and manage the virtual machines that compose our Droplet product. QEMU is the workhorse that enables hundreds of Droplets to run on a single server within our data centers. To perform management actions (like powering off a Droplet), we originally built automation which relied on shelling out to virsh, a command-line client used to interact with the libvirt daemon.

As we began to deploy Go into production, we realized we would need simple and powerful building blocks for future Droplet management tooling. In particular, we wanted packages with:

We explored several open source packages for managing libvirt and QEMU, but none of them were able to completely fulfill our wants and needs, so we created our own: go-qemu and go-libvirt.

How Do QEMU and go-qemu Work?

QEMU provides the hardware emulation layer between Droplets and our bare metal servers. Each QEMU process provides a JSON API over a UNIX or TCP socket, much like a REST API you might find when working with web services. However, instead of using HTTP, it communicates over a protocol known as the QEMU Monitor Protocol (QMP). When you request an action, like powering off a Droplet, the request eventually makes its way to the QEMU process via the QMP socket in the form of { "execute" : "system_powerdown" }.

go-qemu is a Go package that provides a simple interface for communicating with QEMU instances over QMP. It enables the management of QEMU virtual machines directly, using either the monitor socket of a VM or by proxying the request through libvirt. All go-qemu interactions rely on the qemu.Domain and qmp.Monitor types. A qemu.Domain is constructed with an underlying qmp.Monitor, which understands how to speak to the monitor socket of a given VM.

How Do libvirt and go-libvirt Work?

libvirt was designed for client-server communication. Users typically interact with the libvirt daemon through the command-line client virsh. virsh establishes a connection to the daemon either through a local UNIX socket or a TCP connection. Communication follows a custom asynchronous protocol whereby each RPC request or response is preceded by a header describing the incoming payload. Most notably, the header contains a procedure identifier (e.g., "start domain"), the type of request (e.g., call or reply), and a unique serial number used to correlate RPC calls with their respective responses. The payload following the header is XDR encoded, providing an architecture-agnostic method for describing strict data types.

go-libvirt is a Go package which provides a pure Go interface to libvirt. go-libvirt can be used in conjunction with go-qemu to manage VMs by proxying communication through the libvirt daemon.

go-libvirt exploits the availability of the RPC protocol to communicate with libvirt without the need for cgo and C bindings. While using libvirt's C bindings would be easier up front, we try to avoid cgo when possible. Dave Cheney has written an excellent blog post which mirrors many of our own findings. A pure Go library simplifies our build pipelines, reduces dependency headaches, and keeps cross-compilation simple.

By circumventing the C library, we need to keep a close eye on changes in new libvirt releases; libvirt developers may modify the RPC protocol at any time, potentially breaking go-libvirt. To ensure stability and compatibility with various versions of libvirt, we install and run it within Travis CI, which allows integration tests to be run for each new commit to go-libvirt.

Example

The following code demonstrates usage of go-qemu and go-libvirt to interact with all libvirt-managed virtual machines on a given hypervisor.

package main

import (
    "fmt"
    "log"
    "net"
    "time"

    "github.com/digitalocean/go-qemu/hypervisor"
)

func main() {
    driver := hypervisor.NewRPCDriver(func() (net.Conn, error) {
        return net.DialTimeout("unix", "/var/run/libvirt/libvirt-sock", 2*time.Second)
    })

    hv := hypervisor.New(driver)

    fmt.Println("Domain\t\tQEMU Version")
    fmt.Println("--------------------------------------")
    domains, err := hv.Domains()
    if err != nil {
        log.Fatal(err)
    }

    for _, dom := range domains {
        version, err := dom.Version()
        if err != nil {
            log.Fatal(err)
        }

        fmt.Printf("%s\t\t%s\n", dom.Name, version)
        dom.Close()
    }
}

Output

Domain        QEMU Version
----------------------------
Droplet-1        2.7.0
Droplet-2        2.6.0
Droplet-3        2.5.0

What's Next?

Both go-qemu and go-libvirt are still under active development. In the future, we intend to provide an optional cgo QMP monitor which wraps the libvirt C API using the libvirt-go package.

go-qemu and go-libvirt are used in production at DigitalOcean, but the APIs should be treated as unstable, and we recommend that users of these packages vendor them into their applications.

We welcome contributions to the project! In fact, a recent major feature in the go-qemu project was contributed by an engineer outside of DigitalOcean. David Anderson is working on a way to automatically generate QMP structures using the QMP specification in go-qemu. This will save an enormous amount of tedious development and enables contributors to simply wrap these raw types in higher-level types to provide a more idiomatic interface to interact with QEMU instances.

If you'd like to join the fun, feel free to open a GitHub pull-request, file an issue, or join us on IRC (freenode/#go-qemu).

Edit: as clarified by user "eskultet" in our IRC channel, libvirt does indeed guarantee API and ABI stability, and the RPC layer is able to detect any extra or missing elements that would cause the RPC payload to not meet a fixed size requirement. This blog has been updated to correct this misunderstanding.


In Which I Tell You It’s A Good Idea To Support a Magazine-Scanning Patreon

Published 20 Nov 2016 by Jason Scott in ASCII by Jason Scott.

So, Mark Trade and I have never talked, once.

All I know about Mark is that due to his efforts, over 200 scans of magazines are up on the Archive.


These are very good scans, too. The kind of scans that a person looking to find a long-lost article, verify a hard-to-grab fact, or needs to pass along to others a great image would kill to have. 600 dots per inch, excellent contrast, clarity, and the margins cut just right.

(Scan: an inside page from CD-ROM Today issue 5, April/May 1994)

So, I could fill this entry with all the nice covers, but covers are kind of easy, to be frank. You put them face down on the scanner, you do a nice big image, and then touch it up a tad. The cover paper and the printing is always super-quality compared to the rest, so it’ll look good:

(Scan: the cover of CD-ROM Today issue 5, April/May 1994)

But the INSIDE stuff… that's so much harder. Magazines were often bound in a way that put the images RIGHT against the binding, not every magazine did the proper spacing, and all of it is very hard to shove into a scanner without losing some information. I have a lot of well-meaning scans in my life with a lot of information missing.

But these…. these are primo.

(Scans: inside pages from PC Games issue 1, Fall 1988)

When I stumbled on the Patreon, he had three patrons giving him $10 a month. I’d like it to be $500, or $1000. I want this to be his full-time job.

Reading the patreon page’s description of his process shows he’s taking it quite seriously. Steaming glue, removing staples. I’ve gone on record about the pros and cons of destructive scanning, but game magazines are not rare, just entirely unrepresented in scanned items compared to how many people have these things in their past.

I read something like this:

It is extremely unlikely that I will profit from your pledge any time soon. My scanner alone was over $4,000 and the scanning software was $600. Because I’m working with a high volume of high resolution 600 DPI images I purchased several hard drives including a CalDigit T4 20TB RAID array for $2,000. I have also spent several thousand dollars on the magazines themselves, which become more expensive as they become rarer. This is in addition to the cost of my computer, monitor, and other things which go into the creation of these scans. It may sound like I’m rich but really I’m just motivated, working two jobs and pursuing large projects.

…and all I think about is, this guy is doing so much amazing work that so many thousands could be benefiting from, and they should throw a few bucks at him for his time.

My work consists of carefully removing individual pages from magazines with a heat gun or staple-remover so that the entire page may be scanned. Occasionally I will use a stack paper cutter where appropriate and will not involve loss of page content. I will then scan the pages in my large format ADF scanner into 600 DPI uncompressed TIFFs. From there I either upload 300 DPI JPEGs for others to edit and release on various sites or I will edit them myself and store the 600 DPI versions in backup hard disks. I also take photos of magazines still factory-sealed to document their newsstand appearance. I also rip full ISOs of magazine coverdiscs and make scans of coverdisc sleeves on a color-corrected flatbed scanner and upload those to archive.org as well.

This is the sort of thing I can really get behind.

The Internet Archive is scanning stuff, to be sure, but the focus is on books. Magazines are much, much harder to scan – the book scanners in use are just not as easy to use with something bound the way magazines are. The work that Mark is doing is stuff that very few others are doing, and to have canonical scans of the advertisements, writing and materials from magazines that used to populate the shelves is vital.

Some time ago, I gave all my collection of donated game-related magazines to the Museum of Art and Digital Entertainment, because I recognized I couldn’t be scanning them anytime soon, and how difficult the job was going to be. It would take some real, major labor I couldn’t personally give.

Well, here it is. He’s been at it for a year. I’d like to see that monthly number jump to $100/month, $500/month, or more. People dropping $5/month towards this Patreon would be doing a lot for this particular body of knowledge.

Please consider doing it.

Thanks.


A Simple Explanation: VLC.js

Published 17 Nov 2016 by Jason Scott in ASCII by Jason Scott.

The previous entry got the attention it needed, and the maintainers of the VLC project connected with both Emularity developers and Emscripten developers and the process has begun.

The best example of where we are is this screenshot:

[Screenshot: VLC.js running in the browser]

The upshot of this is that a JavaScript-compiled version of the VLC player now runs, spits out a bunch of status and command-line information, and then gets cranky that it has no video/audio device to use.

With the Emularity project, this was something like 2-3 months into the project. In this case, it happened in 3 days.

The reasons it took such a short time were several. First, the VLC maintainers jumped right into it at full-bore. They’ve had to architect VLC for a variety of wide-ranging platforms including OSX, Windows, Android, and even weirdos like OS/2; to them, something aimed at “web” is just another place to go. (They’d also made a few web plugins in the past.) Second, the developers of Emularity and Emscripten were right there to answer the tough questions, the weird little bumps and switchbacks.

Finally, everybody has been super-energetic about it – diving into the idea, without getting hung up on factors or features or what may emerge; the same flexibility that coding gives the world means that the final item will be something that can be refined and improved.

So that’s great news. But after the initial request went into a lot of screens, a wave of demands and questions came along, and I thought I’d answer some of them to the best of my abilities, and also make some observations as well.


When you suggest something somewhat crazy, especially in the programming or development world, there’s a varying amount of response. And if you end up on Hackernews, Reddit, or a number of other high-traffic locations, those reactions fall into some very predictable areas:

So, quickly on some of these:

But let’s shift over to why I think this is important, and why I chose VLC to interact with.

First, VLC is one of those things that people love, or people wish there was something better than, but VLC is what we have. It’s flexible, it’s been well-maintained, and it has been singularly focused. For a very long time, the goal of the project has been aimed at turning both static files AND streams into something you can see on your machine. And the machine you can see it on is pretty much every machine capable of making audio and video work.

Fundamentally, VLC is a bucket: drop a very large variety of sound-oriented or visual-oriented files and containers into it, and it will do something with them. DVD ISO files become playable DVDs, including all the features of said DVDs. VCDs become craptastic but playable DVDs. MP3, FLAC, MIDI, all of them fall into VLC and start becoming scrubbing-ready sound experiences. There are quibbles here and there about accuracy of reproduction (especially with older MOD-like formats like S3M or .XM) but these are code, and fixable in code. That VLC doesn’t immediately barf on the rug with the amount of crapola that can be thrown at it is enormous.

And completing this thought, by choosing something like VLC, with its top-down open source condition and universal approach, the “closing of the loop” from VLC being available in all browsers instantly will ideally cause people to find the time to improve and add formats that otherwise wouldn’t experience such advocacy. Images into Apple II floppy disk image? Oscilloscope captures? Morse code evaluation? Slow Scan Television? If those items have a future, it’s probably in VLC and it’s much more likely if the web uses a VLC that just appears in the browser, no fuss or muss.


Fundamentally, I think my personal motivations are pretty transparent and clear. I help oversee a petabytes-big pile of data at the Internet Archive. A lot of it is very accessible; even more of it is not, or has to have clever “derivations” pulled out of it for access. You can listen to .FLACs that have been uploaded, for example, because we derive (noted) mp3 versions that go through the web more easily. Same for the MPG files that become .mp4s and so on, and so on. A VLC that (optionally) can play off the originals, or which can access formats that currently sit as huge lumps in our archives, will be a fundamental world changer.

Imagine playing DVDs right there, in the browser. Or really old computer formats. Or doing a bunch of simple operations to incoming video and audio to improve it without having to make a pile of slight variations of the originals to stream. VLC.js will do this and do it very well. The millions of files that are currently without any status in the archive will join the millions that do have easy playability. Old or obscure ideas will rejoin the conversation. Forgotten aspects will return. And VLC itself, faced with such a large test sample, will get better at replaying these items in the process.

This is why this is being done. This is why I believe in it so strongly.


I don’t know what roadblocks or technical decisions the team has ahead of it, but they’re working very hard at it, and some sort of prototype seems imminent. The world with this happening will change slightly when it starts working. But as it refines, and as these secondary aspects begin, it will change even more. VLC will change. Maybe even browsers will change.

Access drives preservation. And that’s what’s driving this.

See you on the noisy and image-filled other side.


School Magazines

Published 17 Nov 2016 by leonieh in State Library of Western Australia Blog.

[Cover image: Northam High School magazine (The Avon), June 1939]

School magazines provide a fascinating glimpse into the past.

What was high school like from 1915 through to the 1950s? What issues interested teenagers? How did they react to current events including two world wars? In what ways did they express themselves differently from today’s teens? What sort of jokes did they find amusing? (Hint: there are many of what we would call “dad jokes”.)

The State Library holds an extensive collection of school magazines from both public and private schools. Most don’t start until after 1954 which, as with newspapers, is our cut-off date for digitising, but we have digitised some early issues from public schools.

 

In the first part of the 20th century they were generally produced by the students, with minimal input from school staff – and it shows. The quality of individual issues varies widely, depending, most probably, on the level of talent, interest and time invested by the responsible students.


Cricket cartoon Northam High School (The Avon) Sept. 1930

These magazines may include named photographs of prefects and staff, sporting teams and academic prize winners. Photographs from early editions tend to be of much higher quality, possibly because they were taken using glass negatives.


Essay competition. The subject: “A letter from Mr Collins congratulating Elizabeth on her engagement to Mr Darcy”. Phyllis Hand and Jean McIntyre were the prize winners. Perth Girls’ School Magazine Nov. 1922

You will find poetry and essays, sketches by and of students, amateur cartooning, and many puns, jokes and limericks.

Some issues include ex-student notes with news about the careers, marriages and movements of past students. There is an occasional obituary.


Northam High School (The Avon) June 1943


Does anyone know these twins from Meckering?  Northam High School (The Avon) May 1925

Issues from the war years are particularly interesting and touching. You may also find rolls of honour naming ex-students serving in the forces.

There is also often advertising for local businesses.


Girls’ A Hockey Team Albany High School (Boronia) Dec. 1925

These magazines reflect the attitudes of their tight-knit local community of the time.  Expect to hear the same exhortations to strive for academic, moral and sporting excellence that we hear in schools today – while observing the (in retrospect) somewhat naïve patriotism and call to Empire and the occasional casual racism.

 


The following high school magazines for various dates are either available now online or will appear in the coming weeks: Perth Boys' School Magazine; Perth Girls' School Magazine (later The Magpie); Fremantle Boys' School; Northam High School (The Avon); Girdlestone High School (Coolibah); Eastern Goldfields Senior High School (The Golden Mile – later Pegasus); Bunbury High School (Kingia); Albany High School (Boronia) and Perth Modern (The Sphinx). None are complete and we would welcome donations of missing volumes to add to our Western Australian collections.

If you would like to browse our digitised high school magazines search the State Library catalogue using the term: SCHOOL MAGAZINES

*Some issues of The Magpie are too tightly bound for digitising so they are currently being disbound. They will then be digitised and rebound. Issues should appear in the catalogue in the near future.


Filed under: Family History, SLWA collections, SLWA news, State Library of Western Australia, Uncategorized, WA history, Western Australia Tagged: albany school, bunbury school, digitised magazines, fremantle boys' school, goldfields school, northam school, online magazines wa, perth boys' school, perth modern school, school magazines, State Library of WA, State Library of Western Australia, WA history

What do "Pro" users want?

Published 16 Nov 2016 by Carlos Fenollosa in Carlos Fenollosa — Blog.

My current machine is a 2013 i7 Macbook Air. It doesn't have the Pro label; however, it has two USB 3.0 ports, an SD slot, a Thunderbolt port, and 12 hours of battery life. One of the best non-retina screens around. Judging by this week's snarky comments, it's more Pro than the 2016 Macbook Pro.

Me, I love this laptop. In fact, I love it so much that I bought it to replace an older MBA. I really hoped that Apple would keep selling the same model with a Retina screen and bumped specs.

But is it a Pro computer or not? Well, let me twist the language. I make my living with computers, so by definition it is. Let's put it the other way around: I could have spent more money on a machine which has Pro in its name, but that wouldn't have improved my work output.

What is a Pro user?

So there's this big discussion on whether the Pro label means anything for Apple.

After reading dozens of reviews and blog posts, unsurprisingly, one discovers that different people have different needs. The bottom line is that a Pro user is someone who needs to get their work done and cannot tolerate much bullshit with their tools.

In my opinion, the new Macbook Pros are definitely a Pro machine, even with some valid criticisms. Apple product releases are usually followed by zesty discussions, but this time it's a bit different. It's not only angry Twitter users who are complaining; professional reviewers, engineers, and Pro users have also voiced their concerns.

I think we need to stop thinking that Apple is either stupid or malevolent. They are neither. As a public company, the metric by which their executives are evaluated is stock performance. Infuriating users for no reason only leads to decreasing sales, lower profits, and unhappy investors.

I have some theories on why Apple seems to care less about the Mac, and why many feel the need to complain.

Has the Pro market changed?

Let's be honest: for the last five years Apple probably had the best and most popular computer lineup and pricing in their history. All markets (entry, pro, portability, desktops) had fantastic machines which were totally safe to buy and recommend, at extremely affordable prices.

I've seen this myself. In Spain, one of the poorest EU countries, Apple is not hugely popular. Macs and iPhones are super expensive, and many find it difficult to justify an Apple purchase on their <1000€ salary.

However, in the last three to five years, everybody seemed to buy a Mac, even friends of mine who swore they would never do it. They finally caved in, not because of my advice, but because their non-nerd friends recommended MBPs. And that makes sense. In a 2011 market saturated by ultraportables, Windows 8, and laptops which break every couple of years, Macs were a great investment. You can even resell them after five years for 50% of their price, essentially renting them for half price.

So what happened? Right now, it's not only Pros who are using the Macbook Pro. They're not a professional tool anymore; they're a consumer product. Apple collects usage analytics for their machines and, I suppose, makes informed decisions, like removing less used ports or not increasing storage on iPhones for a long time.

What if Apple is being fed overwhelmingly non-Pro user data for their Pro machines and, as a consequence, their decisions don't serve Pro users anymore, but rather the general public?

First, let's make a quick diversion to address the elephant in the room because, after all, I empathize with the critics.

Apple is Apple

Some assertions you can read on the Internet seem out of touch with a company which made the glaring mistake of building a machine without a floppy, released a lame mp3 player without wireless and less space than a Nomad, tried to revolutionize the world with a phone without a keyboard, and produced an oversized iPhone which is killing the laptop in the consumer market.

Apple always innovates. You can agree whether the direction is correct, but they do. They also copy, and they also steal, like every other company.

What makes them stand out is that they are bolder, dare I say, more courageous than others, to the point of having the courage to use the word courage to justify an unpopular technical decision.

They take more risks on their products. Yes, I think that the current audio jack transition could've been handled better, but they're the first "big brand" to always make such changes on their core products.

This brings us to my main gripe with the current controversy. I applaud their strategy of bringing iPhone ideas, both hardware and software, to the Mac. That is a fantastic policy. You can design a whole device around a touch screen and a secure enclave, then miniaturize it and stick it on a Macbook as a Touch Bar.

Having said that, us pros are generally conservative: we don't update our OS until versions X.1 or X.2, we need all our tools to be compatible, and we don't usually buy first-gen products, unless we self-justify our new toy as a "way to test our app experience on users who have this product".

The Great Criticism Of The 2016 Macbook Pro is mainly fueled by customers who wanted something harder, better, faster, stronger (and cheaper) and instead they got a novel consumer machine with few visible Pro improvements over the previous one and some prominent drawbacks.

Critical Pros are disappointed because they think Apple no longer cares about them. They feel they have no future using products from this company they've long invested in. Right now, there is no clear competitor to the Mac, but if there were, I'm sure many people would vote with their wallets for the other guy.

These critics aren't your typical Ballmers bashing the iPhone out of spite. They are concerned, loyal customers who have spent tens of thousands of dollars in Apple's products.

What's worse, Apple doesn't seem to understand the backlash, as shown by recent executive statements. Feeling misunderstood just infuriates people more, and there are few things as powerful as people frustrated and disappointed with the figures and institutions they respect.

Experiment, but not on my lawn

If I could ask Apple for just one thing, it would be to restrict their courage to the consumer market.

'Member the jokes about the 2008 Macbook Air? Only one port, no DVD drive?

The truth is, nobody cared because that machine was clearly not for them; it was an experiment, which if I may say so, turned out to be one of the most successful ever. Eight years later, many laptops aspire to be a Macbook Air, and the current entry Apple machine, the Macbook "One", is only an iteration on that design.

Nowadays, Apple calls the Retina MBA we had been waiting for a "Macbook Pro". That machine has a 15W CPU, only two ports—one of which is needed for charging—good enough internals, and a great battery for light browsing which suffers on high CPU usage.

But when Apple rebrands this Air as a Pro, real pros get furious, because that machine clearly isn't for them. And this time, to add more fuel to the fire, the consumer segment gets furious too, since it's too expensive: $400 too expensive, to be exact.

By making the conscious decision of positioning this as a Pro machine both in branding and price point, Apple is sending the message that they really do consider this a Pro machine.

One unexpected outcome of this crisis

Regardless, there is one real, tangible risk for Apple.

When looking at the raw numbers, what Apple sees is this: 70% of their revenue comes from iOS devices. Thus, they dedicate around 70% of company resources to that segment. This makes sense.

Unless.

Unless there is an external factor which drives iPhone sales: the availability of iPhone software, which is not controlled by Apple. This software is developed by external Pros. On Macs.

The explosion of the iOS App Store has not been a coincidence. It's the combination of many factors, one of which is a high number of developers and geeks using a Mac daily, thanks to its awesomeness and recent low prices. How many of us got into iPhone development just because Xcode was right there in our OS?

Similarly to how difficult it is to find COBOL developers because barely anyone learns it anymore, if most developers, whatever their day job is, start switching from a Mac to a PC, the interest in iOS development will dwindle quickly.

In summary, the success of the iPhone is directly linked to developer satisfaction with the Mac.

This line of reasoning is not unprecedented. In the 90s, almost all developers were using the Microsoft platform until Linux and OSX appeared. Nowadays, Microsoft is suffering heavily for their past technical decisions. Their mobile platform crashed not because the phones were bad, but because they had no software available.

Right now, Apple is safe, and Pro users will keep using Macs not only thanks to Jobs' successful walled garden strategy, but also because they are the best tools for the job.

While Pro users may not be trend-setters, they win in the long term. Linux won on the server. Apple won the smartphone race because it had already won the developer race. They made awesome laptops and those of us who were using Linux just went ahead and bought a Mac.

Apple thinks future developers will code on iPads. Maybe that's right 10 years from now. The question is, can they save this 10-year gap between current developers and future ones?

The perfect Pro machine

This Macbook Pro is a great machine and, with USB-C ports, is future proof.

Dongles and keyboards are a scapegoat. Criticisms are valid, but I feel they are unjustly directed at this specific machine instead of at Apple's strategy in general. Or, at least, the tiny part of it that us consumers see.

Photographers want an SD slot. Developers want more RAM for their VMs. Students want lower prices. Mobile professionals want an integrated LTE chip. Roadies want more battery life. Here's my wish, different from everybody else's: I want the current Macbook Air with a Retina screen and 20 hours of battery life (10 when the CPU is peaking).

Everybody seems to be either postulating why this is not a Pro machine or criticizing the critics. And they are all right.

Unfortunately, unless given infinite resources, the perfect machine will not exist. I think the critics know that, even if many are projecting their rage on this specific machine.

A letter to Santa

Pro customers, myself included, are afraid that Apple is going to stab them in the back in a few years, and Apple is not doing anything substantial to reduce these fears.

In computing, too, perception is as important as cold, hard facts.

Macs are a great UNIX machine for developers, have a fantastic screen for multimedia Pros, offer amazing build-quality value for budget-constrained self-employed engineers, work awesomely with audio setups thanks to almost inaudible fans, have triple-A software available, and can even run Windows.

We have to admit that us Pros are mostly happily locked in the Apple ecosystem. When we look for alternatives, in many cases, we only see crap. And that's why we are afraid. Is it our own fault? Of course, we are all responsible for our own decisions. Does this mean we have no right to complain?

Apple, if you're listening, please do:

  1. Remember that you sell phones because there are people developing apps for them.
  2. Ask your own engineers which kind of machine they'd like to develop on. Keep making gorgeous Starbucks ornaments if you wish, but clearly split the product lines and the marketing message so all consumers feel included.
  3. Many iOS apps are developed outside the US and the current price point for your machines is too high for the rest of the world. I know part of the price is taxes, but even when accounting for that, a bag of chips, an apartment, or a bike doesn't cost the same in Manhattan as in Barcelona.
  4. Keep making great hardware and innovating, but please, experiment with your consumer line, not your Pro line.
  5. Send an ACK to let us Pros recover our trust in you. Unfortunately, at this point, statements are not enough.

Thank you for reading.

Tags: hardware, apple



Sukhothai: The Dawn of Happiness

Published 16 Nov 2016 by Tom Wilson in tom m wilson.

  It is early morning in Sukhothai, the first capital of present day Thailand, in the north of the country.  From the Sanskrit, Sukhothai means ‘dawn of happiness’.  The air is still cool this morning, and the old city is empty of all but two or three tourists.  Doves coo gently from ancient stone rooftops. […]

Open Source at Its (Hacktober)best

Published 15 Nov 2016 by DigitalOcean in DigitalOcean Blog.

The third-annual Hacktoberfest, which wrapped up October 31, brought a community of project maintainers, seasoned contributors, and open-source beginners together to give back to many great projects. It was a record-setting year that confirmed the power of communities in general, and of the open source community in particular.

Here's what you accomplished in a nutshell:

In this post, we'll get more into numbers and will share some stories from contributors, maintainers, and communities across the world.

Contributors

We put the challenge out there and you stepped up to exceed it! Congratulations to both first-time open source contributors and experienced contributors who set aside time and resources to push the needle forward for thousands of open source projects.

This year, we had a record number of contributors from around the world participate:

Developers around the world shared their stories with us, explaining what Hacktoberfest meant to them. One contributor who completed the challenge said:

I am a senior computer science student but have always been too intimidated to submit to other open github projects. Hacktoberfest gave me a reason to do that and I am really glad I did. I will for sure be submitting a lot more in the future.

Aditya Dalal from Homebrew Cask went from being a Hacktoberfest contributor in 2015 to being a project maintainer in 2016:

I actually started contributing to Open Source in a meaningful way because of Hacktoberfest. Homebrew Cask was a convenient tool in my daily usage, and Hacktoberfest provided an extra incentive to contribute back. Over time, I continued contributing and ended up as a maintainer, focusing on triaging issues and making the contribution process as simple as possible (which I like to think we have succeeded at).

Maintainers

A HUGE and very special shout out goes out to project maintainers. Many of you added "Hacktoberfest" labels (+15,000) to project issues and tweeted out your projects, encouraging others to join in on the fun. We know that Hacktoberfest makes things busier than usual. Thank you for setting a great example for future project maintainers—without you, Hacktoberfest wouldn't be possible!

Some maintainers went out of their way to make sure contributors had a great experience:

…and others created awesome challenges:

Events

This year, we wanted to highlight the collaborative aspect of open source and created a Hacktoberfest-themed Meetup Kit with tips and tools for anyone who wanted to organize a Hacktoberfest event.

As a result, Hacktoberfest meetups popped up all over the world. More than 30 communities held 40 events in 29 cities across 12 countries including Cameroon, Canada, Denmark, Finland, France, India, Kenya, New Zealand, Spain, Ukraine, UK, and the US (click here to see a full list of Hacktoberfest events).

Thank you to event organizers who brought your communities together through pair programming, mentorship, demos, workshops, and hack fests.

If you didn't have a chance to attend a Hacktoberfest-themed event near you, we encourage you to host one anytime or suggest the idea to your favorite meetup.


Clockwise, from top left: Hacktoberfest Paris Meetup by Sigfox, Paris, France, Fullstack Open Source | Hacktober Edition, Los Angeles, California, USA, Hacktober Fest Meetup at NITK Surathkal, Mangalore, India, and Hacktober Night by BlackCodeCollective, Arlington, Virginia, USA.

Beyond 2016

Thank you to our friends at GitHub for helping us make Hacktoberfest 2016 possible. And special thanks go out to our friends at Mozilla, Intel, and CoreOS for supporting the initiative.

Tell us: What did you enjoy about Hacktoberfest this year? What can we do to make it even better next year? Let us know in the comments.

Until we meet again—happy hacking!


What is the mediawiki install path on Ubuntu when you install it from the Repos?

Published 15 Nov 2016 by Akiva in Newest questions tagged mediawiki - Ask Ubuntu.

What is the mediawiki install path on Ubuntu when you install it from the Repos?

Specifically looking for the extensions folder.
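
One way to check from a shell (a sketch; it assumes the package installed from the Ubuntu repositories is named mediawiki, and that dpkg is available):

# Sketch: list the files the package installed and look for the extensions
# directory; on Debian-family packages the code usually lands under /usr/share.
dpkg -L mediawiki | grep -i extensions
ls /usr/share/mediawiki/extensions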


Working the polls: reflection

Published 9 Nov 2016 by legoktm in The Lego Mirror.

As I said earlier, I worked the polls from 6 a.m. to roughly 9:20 p.m. We had one voter come in right in the nick of time at 7:59 p.m.

I was glad to see that we had a lot of first time voters, as well as some who just filled out one issue on the three(!) page ballot, and then left. Overall, I've come to the conclusion that everyone is just like me and votes just to get a sticker. We had quite a few people who voted by mail and stopped by just to get their "I voted!" sticker.

I should get paid $145 for working, which I shall be donating to https://riseup.net/. And I plan to be helping out during the next election!


HSTS header not being sent though rule is present and mod_headers is enabled

Published 5 Nov 2016 by jww in Newest questions tagged mediawiki - Server Fault.

We enabled HSTS in httpd.conf in the Virtual Host handling port 443. We tried with and without the <IfModule mod_headers.c>:

<IfModule mod_headers.c>
    Header set Strict-Transport-Security "max-age=10886400; includeSubDomains"
</IfModule>

But the server does not include the header in a response. Below is from curl over HTTPS:

> GET / HTTP/1.1
> Host: www.cryptopp.com
> User-Agent: curl/7.51.0
> Accept: */*
> 
< HTTP/1.1 200 OK
< Date: Sat, 05 Nov 2016 22:49:25 GMT
< Server: Apache/2.4.6 (CentOS) OpenSSL/1.0.1e-fips
< Last-Modified: Wed, 02 Nov 2016 01:27:08 GMT
< ETag: "8988-5404756e12afc"
< Accept-Ranges: bytes
< Content-Length: 35208
< Vary: Accept-Encoding
< Content-Type: text/html; charset=UTF-8

The relevant section of httpd.conf and the cURL transcript are shown below. Apache shows mod_headers is loaded, and grepping all the logs doesn't reveal an error.

The Apache version is Apache/2.4.6 (CentOS). The PHP version is 5.4.16 (cli) (built: Aug 11 2016 21:24:59). The Mediawiki version is 1.26.4.

What might be the problem here, and how could I solve this?


httpd.conf

<VirtualHost *:80>
    ServerName www.cryptopp.com
    ServerAlias *.cryptopp.com *.cryptopp.* cryptopp.com

    <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteCond %{REQUEST_METHOD} ^TRACE
        RewriteRule .* - [F]
        RewriteCond %{REQUEST_METHOD} ^TRACK
        RewriteRule .* - [F]
        #redirect all port 80 traffic to 443
        RewriteCond %{SERVER_PORT} !^443$
        RewriteRule ^/?(.*) https://www.cryptopp.com/$1 [L,R]
   </IfModule>
</VirtualHost>

<VirtualHost *:443>
    ServerName www.cryptopp.com
    ServerAlias *.cryptopp.com *.cryptopp.* cryptopp.com

    <IfModule mod_headers.c>
        Header set Strict-Transport-Security "max-age=10886400; includeSubDomains"
    </IfModule>
</VirtualHost>

mod_headers

# cat /etc/httpd/conf.modules.d/00-base.conf | grep headers
LoadModule headers_module modules/mod_headers.so

# httpd -t -D DUMP_MODULES | grep header
 headers_module (shared)

error logs

# grep -IR "Strict-Transport-Security" /etc
/etc/httpd/conf/httpd.conf:        Header set Strict-Transport-Security "max-age=10886400; includeSubDomains" env=HTTPS  
# grep -IR "Strict-Transport-Security" /var/log/
# grep -IR "mod_headers" /var/log/
#

.htaccess

# find /var/www -name '.htaccess' -printf '%p\n' -exec cat {} \;
/var/www/html/w/cache/.htaccess
Deny from all
/var/www/html/w/languages/.htaccess
Deny from all
/var/www/html/w/extensions/MobileFrontend/dev-scripts/.htaccess
Deny from all
/var/www/html/w/maintenance/archives/.htaccess
Deny from all
/var/www/html/w/maintenance/.htaccess
Deny from all
/var/www/html/w/serialized/.htaccess
Deny from all
/var/www/html/w/images/temp/.htaccess
# Protect against bug 28235
<IfModule rewrite_module>
    RewriteEngine On
    RewriteCond %{QUERY_STRING} \.[^\\/:*?\x22<>|%]+(#|\?|$) [nocase]
    RewriteRule . - [forbidden]
</IfModule>
/var/www/html/w/images/.htaccess
# Protect against bug 28235
<IfModule rewrite_module>
    RewriteEngine On
    RewriteCond %{QUERY_STRING} \.[^\\/:*?\x22<>|%]+(#|\?|$) [nocase]
    RewriteRule . - [forbidden]
    # Fix for bug T64289
    Options +FollowSymLinks
</IfModule>
/var/www/html/w/images/deleted/.htaccess
Deny from all
/var/www/html/w/includes/.htaccess
Deny from all
/var/www/html/.htaccess
RewriteEngine on
RewriteRule ^wiki/?(.*)$ /w/index.php?title=$1 [L,QSA]
<IfModule mod_deflate.c>
<FilesMatch "\.(js|css|html)$">
SetOutputFilter DEFLATE
</FilesMatch>
</IfModule>

curl transcript

$ /usr/local/bin/curl -Lv cryptopp.com
* Rebuilt URL to: cryptopp.com/
*   Trying 192.210.150.121...
* TCP_NODELAY set
* Connected to cryptopp.com (192.210.150.121) port 80 (#0)
> GET / HTTP/1.1
> Host: cryptopp.com
> User-Agent: curl/7.51.0
> Accept: */*
> 
< HTTP/1.1 302 Found
< Date: Sat, 05 Nov 2016 22:49:25 GMT
< Server: Apache/2.4.6 (CentOS) OpenSSL/1.0.1e-fips
< Location: https://www.cryptopp.com/
< Content-Length: 209
< Content-Type: text/html; charset=iso-8859-1
< 
* Ignoring the response-body
* Curl_http_done: called premature == 0
* Connection #0 to host cryptopp.com left intact
* Issue another request to this URL: 'https://www.cryptopp.com/'
*   Trying 192.210.150.121...
* TCP_NODELAY set
* Connected to www.cryptopp.com (192.210.150.121) port 443 (#1)
* ALPN, offering http/1.1
* Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH
* successfully set certificate verify locations:
*   CAfile: /opt/local/share/curl/curl-ca-bundle.crt
  CApath: none
* TLSv1.2 (OUT), TLS header, Certificate Status (22):
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Client hello (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS change cipher, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES256-GCM-SHA384
* ALPN, server did not agree to a protocol
* Server certificate:
*  subject: OU=Domain Control Validated; OU=COMODO SSL Unified Communications
*  start date: Sep 17 00:00:00 2015 GMT
*  expire date: Sep 16 23:59:59 2018 GMT
*  subjectAltName: host "www.cryptopp.com" matched cert's "www.cryptopp.com"
*  issuer: C=GB; ST=Greater Manchester; L=Salford; O=COMODO CA Limited; CN=COMODO RSA Domain Validation Secure Server CA
*  SSL certificate verify ok.
> GET / HTTP/1.1
> Host: www.cryptopp.com
> User-Agent: curl/7.51.0
> Accept: */*
> 
< HTTP/1.1 200 OK
< Date: Sat, 05 Nov 2016 22:49:25 GMT
< Server: Apache/2.4.6 (CentOS) OpenSSL/1.0.1e-fips
< Last-Modified: Wed, 02 Nov 2016 01:27:08 GMT
< ETag: "8988-5404756e12afc"
< Accept-Ranges: bytes
< Content-Length: 35208
< Vary: Accept-Encoding
< Content-Type: text/html; charset=UTF-8
< 
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"
"http://www.w3.org/TR/html4/loose.dtd">
<html>
<head>
  <meta http-equiv="Content-Type" content="text/html; charset=utf-8">
  <title>Crypto++ Library 5.6.5 | Free C++ Class Library of Cryptographic Schemes</title>
  <meta name="description" content=
  "free C++ library for cryptography: includes ciphers, message authentication codes, one-way hash functions, public-key cryptosystems, key agreement schemes, and deflate compression">
  <link rel="stylesheet" type="text/css" href="cryptopp.css">
</head>
...
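
A generic way to cross-check the live configuration (a sketch, not a diagnosis; the paths follow the CentOS httpd layout already shown above):

# Sketch: confirm which vhost answers on port 443, where the loaded
# configuration sets Strict-Transport-Security, and whether the header
# actually shows up over TLS.
httpd -t -D DUMP_VHOSTS
grep -Rn "Strict-Transport-Security" /etc/httpd/
curl -sI https://www.cryptopp.com/ | grep -i strict-transport-security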

Firefox "The page isn’t redirecting properly" for a Wiki (all other Pages and UAs are OK) [closed]

Published 5 Nov 2016 by jww in Newest questions tagged mediawiki - Webmasters Stack Exchange.

We are having trouble with a website for a free and open source project. The website and its three components are as follows. It's running on a CentOS 7 VM hosted by someone else (PaaS).

The Apache version is Apache/2.4.6 (CentOS). The PHP version is 5.4.16 (cli) (built: Aug 11 2016 21:24:59). The Mediawiki version is 1.26.4.

The main site is OK and can be reached through both cryptopp.com and www.cryptopp.com in all browsers and user agents. The manual is OK and can be reached through both cryptopp.com/docs and www.cryptopp.com/docs in all browsers and user agents.

The wiki is OK under most browsers and all tools. Safari is OK. Internet Explorer is OK. Chrome is untested because I don't use it. Command line tools like cURL and wget are OK. A trace using wget is below.

The wiki is a problem under Firefox. It cannot be reached at either cryptopp.com/wiki or www.cryptopp.com/wiki in Firefox. Firefox displays an error on both OS X 10.8 and Windows 8. Firefox is fully patched on both platforms. The failure is:

[Screenshot: Firefox error "The page isn't redirecting properly"]

We know the problem is due to a recent change to direct all traffic to HTTPS. The relevant addition to httpd.conf is below. The change in our policy is due to Chrome's upcoming policy change regarding Security UX indicators.

I know these are crummy questions (none of us are webmasters or admins in our day job)... What is the problem? How do I troubleshoot it? How do I fix it?


wget trace

$ wget http://cryptopp.com/wiki/ 
--2016-11-05 12:53:54--  http://cryptopp.com/wiki/
Resolving cryptopp.com (cryptopp.com)... 192.210.150.121
Connecting to cryptopp.com (cryptopp.com)|192.210.150.121|:80... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://www.cryptopp.com/wiki/ [following]
--2016-11-05 12:53:54--  https://www.cryptopp.com/wiki/
Resolving www.cryptopp.com (www.cryptopp.com)... 192.210.150.121
Connecting to www.cryptopp.com (www.cryptopp.com)|192.210.150.121|:443... connected.
HTTP request sent, awaiting response... 301 Moved Permanently
Location: https://cryptopp.com/wiki/Main_Page [following]
--2016-11-05 12:53:54--  https://cryptopp.com/wiki/Main_Page
Connecting to cryptopp.com (cryptopp.com)|192.210.150.121|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/html]
Saving to: ‘index.html’

index.html              [ <=>                ]  20.04K  --.-KB/s    in 0.03s   

2016-11-05 12:53:54 (767 KB/s) - ‘index.html’ saved [20520]

Firefox access_log

# tail -16 /var/log/httpd/access_log
<removed irrelevant entries>
71.244.244.203 - - [05/Nov/2016:13:00:52 -0400] "GET /wiki/Main_Page HTTP/1.1" 302 20 "https://www.cryptopp.com/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; rv:48.0) Gecko/20100101 Firefox/48.0"
71.244.244.203 - - [05/Nov/2016:13:00:52 -0400] "GET /wiki/Main_Page HTTP/1.1" 302 20 "https://www.cryptopp.com/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; rv:48.0) Gecko/20100101 Firefox/48.0"
71.244.244.203 - - [05/Nov/2016:13:00:53 -0400] "GET /wiki/Main_Page HTTP/1.1" 302 20 "https://www.cryptopp.com/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; rv:48.0) Gecko/20100101 Firefox/48.0"
71.244.244.203 - - [05/Nov/2016:13:00:53 -0400] "GET /wiki/Main_Page HTTP/1.1" 302 20 "https://www.cryptopp.com/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; rv:48.0) Gecko/20100101 Firefox/48.0"
71.244.244.203 - - [05/Nov/2016:13:00:53 -0400] "GET /wiki/Main_Page HTTP/1.1" 302 20 "https://www.cryptopp.com/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; rv:48.0) Gecko/20100101 Firefox/48.0"
71.244.244.203 - - [05/Nov/2016:13:00:53 -0400] "GET /wiki/Main_Page HTTP/1.1" 302 20 "https://www.cryptopp.com/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; rv:48.0) Gecko/20100101 Firefox/48.0"
71.244.244.203 - - [05/Nov/2016:13:00:53 -0400] "GET /wiki/Main_Page HTTP/1.1" 302 20 "https://www.cryptopp.com/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; rv:48.0) Gecko/20100101 Firefox/48.0"
71.244.244.203 - - [05/Nov/2016:13:00:53 -0400] "GET /wiki/Main_Page HTTP/1.1" 302 20 "https://www.cryptopp.com/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; rv:48.0) Gecko/20100101 Firefox/48.0"
71.244.244.203 - - [05/Nov/2016:13:00:53 -0400] "GET /wiki/Main_Page HTTP/1.1" 302 20 "https://www.cryptopp.com/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; rv:48.0) Gecko/20100101 Firefox/48.0"
71.244.244.203 - - [05/Nov/2016:13:00:54 -0400] "GET /wiki/Main_Page HTTP/1.1" 302 20 "https://www.cryptopp.com/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; rv:48.0) Gecko/20100101 Firefox/48.0"
71.244.244.203 - - [05/Nov/2016:13:00:54 -0400] "GET /wiki/Main_Page HTTP/1.1" 302 20 "https://www.cryptopp.com/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; rv:48.0) Gecko/20100101 Firefox/48.0"

httpd.conf change

<VirtualHost *:80>
    ServerName www.cryptopp.com
    ServerAlias *.cryptopp.com *.cryptopp.* cryptopp.com

    <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteCond %{REQUEST_METHOD} ^TRACE
        RewriteRule .* - [F]
        RewriteCond %{REQUEST_METHOD} ^TRACK
        RewriteRule .* - [F]

        #redirect all port 80 traffic to 443
        RewriteCond %{SERVER_PORT} !^443$
        RewriteRule ^/?(.*) https://www.cryptopp.com/$1 [L,R]
    </IfModule>    
</VirtualHost>

<VirtualHost *:443>
    ServerName www.cryptopp.com
    ServerAlias *.cryptopp.com *.cryptopp.* cryptopp.com
</VirtualHost>
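
One way to reproduce the behaviour outside the browser (a sketch; the User-Agent string is copied from the access_log entries above, and --max-redirs simply caps the loop):

# Sketch: follow redirects while presenting Firefox's User-Agent, printing
# only the status lines and Location headers to expose a redirect loop.
curl -sL --max-redirs 10 -D - -o /dev/null \
  -A "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; rv:48.0) Gecko/20100101 Firefox/48.0" \
  https://www.cryptopp.com/wiki/ | grep -iE "^(HTTP|Location)"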

Wikidata Map Animations

Published 4 Nov 2016 by addshore in Addshore.

Back in 2013, maps were generated almost daily to track the immediate usage of the then new coordinate location property within the project. An animation was then created by Denny & Lydia showing the amazing growth, which can be seen on commons here. Recently we found the original images used to make this animation, starting in June 2013 and extending to September 2013, and to celebrate the fourth birthday of Wikidata we decided to make a few new animations.

The above animation contains images from 2013 (June to September) and then 2014 onwards.

This gap could be what resulted in the visible jump in brightness of the gif. This jump could also be explained by different render settings used to create the map, at some point we should go back and generate standardized images for every week / months that coordinates have existed on Wikidata.

The whole gif and the individual halves can all be found on commons under CC0:

The animations were generated directly from png files using the following command:

convert -delay 10 -loop 0 *.png output.gif

These animations use the “small” images generated in previous posts such as Wikidata Map October 2016.


A Simple Request: VLC.js

Published 1 Nov 2016 by Jason Scott in ASCII by Jason Scott.

Almost five years ago to the day, I made a simple proposal to the world: Port MAME/MESS to Javascript.

That happened.

I mean, it cost a dozen people hundreds of hours of their lives…. and there were tears, rage, crisis, drama, and broken hearts and feelings… but it did happen, and the elation and the world we live in now is quite amazing, with instantaneous emulated programs in the browser. And it’s gotten boring for people who know about it, except when they haven’t heard about it until now.

By the way: work continues earnestly on what was called JSMESS and is now called The Emularity. We’re doing experiments with putting it in WebAssembly and refining a bunch of UI concerns and generally making it better, faster, cooler with each iteration. Get involved – come to #jsmess on EFNet or contact me with questions.

In celebration of the five years, I’d like to suggest a new project, one of several candidates I’ve weighed but which I think has the best combination of effort to absolute game-changer in the world.


Hey, come back!

It is my belief that a Javascript (later WebAssembly) port of VLC, the VideoLan Player, will fundamentally change our relationship to a mass of materials and files out there, ones which are played, viewed, or accessed. Just like we had a lot of software locked away in static formats that required extensive steps to even view or understand, so too do we have formats beyond the “usual” that are also frozen into a multi-step process. Making these instantaneously function in the browser, all browsers, would be a revolution.

A quick glance at the features list of VLC shows how many variant formats it handles, from audio and sound files through to encapsulations like DVD and VCDs. Files that now rest as hunks of ISOs and .ZIP files that could be turned into living, participatory parts of the online conversation. Also, formats like .MOD and .XM (trust me) would live again effectively.

Also, VLC has weathered years and years of existence, and the additional use case for it would help people contribute to it, much like there have been some improvements in MAME/MESS over time as folks who normally didn’t dip in there added suggestions or feedback to make the project better in pretty obscure realms.

I firmly believe that this project, fundamentally, would change the relationship of audio/video to the web. 

I’ll write more about this in coming months, I’m sure, but if you’re interested, stop by #vlcjs on EFnet, or ping me on twitter at @textfiles, or write to me at vlcjs@textfiles.com with your thoughts and feedback.

See you.

 


Digital Collecting – Exciting and Challenging times

Published 31 Oct 2016 by slwacns in State Library of Western Australia Blog.

Dear Reader, this post does not (yet) have a happy ending, but rather it’s a snapshot of some of the challenges we’re facing, and might provide some insight into how we handle content (especially the digital stuff).  I’m also hoping it’ll start you thinking about how you might handle/organise your own personal collections.  If it does, please let me know by adding a comment below.  Now enough from me, and on with the story…


Not so long ago we received a trolley full of files from a private organisation.  This is not an unusual scenario, as we often collect from Western Australian organisations, and it is part of the job of our Collection Liaison team to evaluate and respond to offers of content.  The files we received included the usual range of hardcopy content – Annual Reports, promotional publications, internal memos and the like… and a hard drive.

Not being totally sure what was on the hard drive, we thought we’d best take a look.  We used our write blocker (a device to stop any changes happening on the hard drive), and accessed the drive.  Well, we tried to… Challenge 1 was hit – we couldn’t open the drive.  A bit of investigation later, (and with the use of a Mac), the drive was accessed.  Funny to think at this point how used we get to our own ‘standard’ environments. If you are the only person in your family to use a Mac, and your drives are Mac formatted, how are you going to share files with Windows users?

Once we could get to the content, we carefully copied the contents onto a working directory on our storage system.  (Carefully for us means programmatically checking files we were transferring, and re-checking them once copied to ensure the files weren’t corrupted or changed during the transfer process).  At the same time, our program created a list of contents of the drive.  There were a mere 15,000 files.  Challenge 2 started to emerge… fifteen thousand is a big number of files!  How many files would you have on your device(s)?  If you gave them all to someone, would they freak out, or would they know which ones were important?

[Enter some investigation into the content of the files].  Hmmm – looks like most things are well organised – I can see that a couple of directories are labelled by year (‘2014’, ‘2015’, ‘2016’), and there are some additional ‘Project’ folders.  Great!  This is really quite OK.  What’s more (following our guidelines), the donor has provided us with details of each section of the collection – including a (necessarily broad) description of what’s on the drive – that’ll be really helpful when our cataloguers need to describe the contents. Challenge 4 – Identifying the contents, is (at a high level anyway) looking doable.  Oops – hold that thought – there’s a directory of files called ‘Transferred’ – What does that mean? Hmmm…

 

Enough for now – stay tuned for updates on the processing of this collection, and feel free to get in touch. Comments below, or if you think we may have something that is collectable, start at this web page: http://slwa.wa.gov.au/for/donations


Filed under: Uncategorized

Manually insert text into existing MediaWiki table row?

Published 30 Oct 2016 by jww in Newest questions tagged mediawiki - Webmasters Stack Exchange.

I'm trying to update a page for a MediaWiki database running MW version 1.26.4. The MediaWiki is currently suffering unexplained Internal Server Errors, so I am trying to perform an end-around by updating the database directly.

I logged into the database with the proper credentials. I dumped the table of interest and I see the row I want to update:

MariaDB [my_wiki]> select * from wikicryptopp_page;
+---------+----------------+---------------------------------------------------------------------------+-------------------+------------------+-------------+--------------------+----------------+-------------+----------+--------------------+--------------------+-----------+
| page_id | page_namespace | page_title                                                                | page_restrictions | page_is_redirect | page_is_new | page_random        | page_touched   | page_latest | page_len | page_content_model | page_links_updated | page_lang |
+---------+----------------+---------------------------------------------------------------------------+-------------------+------------------+-------------+--------------------+----------------+-------------+----------+--------------------+--------------------+-----------+
|       1 |              0 | Main_Page                                                                 |                   |                0 |           0 |     0.161024148737 | 20161011215919 |       13853 |     3571 | wikitext           | 20161011215919     | NULL      |
...
|    3720 |              0 | GNUmakefile                                                               |                   |                0 |           0 |     0.792691625226 | 20161030095525 |       13941 |    36528 | wikitext           | 20161030095525     | NULL      |
...

I know exactly where the insertion should occur, and I have the text I want to insert. The Page Title is GNUmakefile, and the Page ID is 3720.

The text is large at 36+ KB, and it's sitting on the filesystem in a text file. How do I manually insert the text into the existing table row?
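
For what it's worth, a less invasive route might be MediaWiki's own maintenance scripts, since the wikitext itself lives in the revision/text tables rather than in the page row. A sketch (the install path, user name and file path are assumptions):

# Sketch: save the 36+ KB file as a normal edit via maintenance/edit.php,
# which reads the new page text from stdin.
cd /var/www/html/w
php maintenance/edit.php -u Admin -s "update GNUmakefile" GNUmakefile < /path/to/GNUmakefile.txt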


How to log-in with more rights than Admin or Bureaucrat?

Published 30 Oct 2016 by jww in Newest questions tagged mediawiki - Webmasters Stack Exchange.

I'm having a heck of a time with MediaWiki and an Internal Server Error. I'd like to log-in with more privileges than afforded by Admin and Bureaucrat in hopes of actually being able to save a page.

I am an admin on the VM that hosts the wiki. I have all the usernames and passwords at my disposal. I tried logging in with the MediaWiki user and password from LocalSettings.php but the log-in failed.

Is it possible to acquire more privileges than provided by Admin or Bureaucrat? If so, how do I log-in with more rights than Admin or Bureaucrat?


Character set 'utf-8' is not a compiled character set and is not specified in the '/usr/share/mysql/charsets/Index.xml' file

Published 28 Oct 2016 by jww in Newest questions tagged mediawiki - Webmasters Stack Exchange.

We are trying to upgrade our MediaWiki software. According to Manual:Upgrading -> UPGRADE -> Manual:Backing_up_a_wiki, we are supposed to back up the database with:

mysqldump -h hostname -u userid -p --default-character-set=whatever dbname > backup.sql

When we run the command with our parameters and --default-character-set=utf-8:

$ sudo mysqldump -h localhost -u XXX -p YYY --default-character-set=utf-8 ZZZ > 
backup.sql
mysqldump: Character set 'utf-8' is not a compiled character set and is not spec
ified in the '/usr/share/mysql/charsets/Index.xml' file

Checking Index.xml appears to show utf-8 is available. UTF-8 is specifically called out by Manual:$wgDBTableOptions.

$ cat /usr/share/mysql/charsets/Index.xml | grep -B 3 -i 'utf-8'
...
<charset name="utf8">
  <family>Unicode</family>
  <description>UTF-8 Unicode</description>
  <alias>utf-8</alias>
...

We tried both UTF-8 and utf-8 as specified by Manual:$wgDBTableOptions.

I have a couple of questions. First, can we omit --default-character-set since it's not working as expected? Second, if we have to use --default-character-set, then what is used to specify UTF-8?


A third, related question is, can we forgo mysqldump altogether by taking the wiki and database offline and then making a physical copy of the database? I am happy to make a copy of the physical database for a restore, and I really don't care much for using tools that cause more trouble than they solve.

If the third item is a viable option, then what is the physical database file that needs to be copied?
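
For the first two questions, one variant that may avoid the error (a sketch, not verified against this particular server): mysqldump appears happier with MySQL's internal charset name utf8 than with the utf-8 alias from Index.xml.

# Sketch: same backup using the internal charset name "utf8"; host, user
# and database name are placeholders from the question.
mysqldump -h localhost -u XXX -p --default-character-set=utf8 ZZZ > backup.sql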


Wikidata Map October 2016

Published 28 Oct 2016 by addshore in Addshore.

It has been another 5 months since my last post about the Wikidata maps, and again some areas of the world have lit up. Since my last post, at least 9 noticeable areas have appeared with many new items containing coordinate locations. These include Afghanistan, Angola, Bosnia & Herzegovina, Burundi, Lebanon, Lithuania, Macedonia, South Sudan and Syria.

The difference map below was generated using Resemble.js. The pink areas show areas of difference between the two maps from April and October 2016.

Who caused the additions?

To work out what items exist in the areas that have a large amount of change, the Wikidata query service can be used. I adapted a simple SPARQL query to show the items within a radius of the centre of each area of increase. For example, Afghanistan used the following query:

#defaultView:Map
 SELECT ?place ?placeLabel ?location ?instanceLabel
WHERE
{
  wd:Q889 wdt:P625 ?loc . 
  SERVICE wikibase:around { 
      ?place wdt:P625 ?location . 
      bd:serviceParam wikibase:center ?loc . 
      bd:serviceParam wikibase:radius "100" . 
  } 
  OPTIONAL {    ?place wdt:P31 ?instance  }
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en" }
  BIND(geof:distance(?loc, ?location) as ?dist) 
} ORDER BY ?dist


The query can be seen running here and above. The items can then be clicked on directly and their history loaded.

The individual edits that added the coordinates can easily be spotted.

Of course this could also be done using a script following roughly the same process.
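
As a rough illustration of what such a script could look like (a sketch; the endpoint and parameters are the public Wikidata Query Service defaults, and jq is assumed to be installed):

# Sketch: run the same "around" query against the SPARQL endpoint from the
# command line and print the URIs of the matching items.
QUERY='SELECT ?place ?location WHERE {
  wd:Q889 wdt:P625 ?loc .
  SERVICE wikibase:around {
    ?place wdt:P625 ?location .
    bd:serviceParam wikibase:center ?loc .
    bd:serviceParam wikibase:radius "100" .
  }
} LIMIT 100'
curl -sG 'https://query.wikidata.org/sparql' \
     --data-urlencode "query=${QUERY}" \
     --data-urlencode 'format=json' \
  | jq -r '.results.bindings[].place.value'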

It looks like many of the areas of mass increase can be attributed to Reinheitsgebot (Magnus Manske), due to a bot run in April 2016. Many of the coordinates in Lithuania can be attributed to KrBot, due to a bot run in May 2016.

October 2016 maps

The October 2016 maps can be found on commons:

Labs project

I have given the ‘Wikidata Analysis’ tool a speedy reboot over the past weeks and generated many maps for many old dumps that are not currently on Wikimedia Commons.

The tool now contains a collection of date-stamped directories which contain the data generated by the Java dump-scanning tool, as well as the images that are then generated from that data using a Python script.


MediaWiki's VisualEditor component Parsoid not working after switching php7.0 to php5.7

Published 27 Oct 2016 by Dávid Kakaš in Newest questions tagged mediawiki - Ask Ubuntu.

I would like to ask you for your help with the following:

Because the forum CMS phpBB does not currently support PHP >= 7.0, I had to switch to php5.6 on my Ubuntu 16.04 LTS server. So I installed the php5.6 packages from ppa:ondrej/php and, by running:

sudo a2dismod php7.0 ; sudo a2enmod php5.6 ; sudo service apache2 restart
sudo ln -sfn /usr/bin/php5.6 /etc/alternatives/php

... I switched to php5.6.

Unfortunately, this caused my MediaWiki's VisualEditor to stop working. I had made the MediaWiki plug-in talk to the Parsoid server before switching PHP, and everything was working as expected. Also, when I switched back to php7.0 using:

sudo a2dismod php5.6 ; sudo a2enmod php7.0 ; sudo service apache2 restart
sudo ln -sfn /usr/bin/php7.0 /etc/alternatives/php

... the wiki is working fine once again; however, posts with phpBB functionalities like phpBBCodes and tags are failing to be submitted. Well, php7.0 is unsupported so I cannot complain, and instead I am trying to make Parsoid work with php5.6 (which should be supported).

Error displayed when:

Other (possible) error symptoms:

[warning] [{MY_PARSOID_CONF_PREFIX}/Hlavná_stránka] non-200 response: 401 <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN"> <html><head> <title>401 Unauthorized</title> </head><body> <h1>Unauthorized</h1> <p>This server could not verify that you are authorized to access the document requested. Either you supplied the wrong credentials (e.g., bad password), or your browser doesn't understand how to supply the credentials required.</p> <hr> <address>Apache/2.4.18 (Ubuntu) Server at beta.abs4data.com Port 443</address> </body></html>

... however, now I don't get any warnings in the log! Even when performing "sudo service parsoid status" it shows "/bin/sh -c /usr/bin/nodejs /usr/lib/parsoid/src/bin/server.js -c /etc/mediawiki/parsoid/server.js -c /etc/mediawiki/parsoid/settings.js >> /var/log/parsoid/parsoid.log 2>&1", which I hope means it is outputting error messages to the log.

I tried:

Possible Cause:

What do you think? Any suggestions on how to solve or further test this problem?

P.S. Sorry for the badly formatted code in the question, but it somehow broke ... seems I am the problem after all :-D


Droplet Tagging: Organize Your Infrastructure

Published 25 Oct 2016 by DigitalOcean in DigitalOcean Blog.

At DigitalOcean, we are on a mission to make managing production applications simple. Today, we are officially announcing the addition of Droplet tags to make it even easier to work with large-scale production applications.

Last fall, we quietly launched tagging and management of resources via our public API. Since then, over 94,000 Droplets have been tagged including use cases like:

As developers ourselves, we know how important it is to stay organized when working on and managing applications. Tags are a simple and powerful way to do this.

How Do You Use Tags?

When we released tagging via the API, we received a lot of fantastic feedback. It was exciting to see our community embrace a feature to this extent, and it proved that we needed to add tags to our Cloud control panel too.

We've added tags to all Droplet-related views, like the main Droplets page, in order to make managing your Droplets and tags simpler from wherever you are - Cloud control panel, Metadata Service, and API.

Control panel

We also created a new tag-only view, which allows you to see all Droplets with a given tag. Here, you can see how our team groups our production Droplets by tag:

Control panel filtered by tag

For more detail on how to use tags via the control panel, check out our tagging tutorial on our Community Site.

What Can You Use Tags For?

Managing Resource Ownership

A simple tag like team:data or team:widget makes it easy to know exactly who is responsible for a given set of Droplets. For example, different teams in a company may share a single DigitalOcean Team, and can use tags to track their resource usage separately. Engineers on an on-call rotation, an ops-team, a finance team, or anyone simply debugging a problem can benefit from these kinds of tags as well.

Monitoring and Automation

Knowing the importance of a given Droplet to the healthy operation of a product is an essential part of ensuring the reliability of your system, and tagging your Droplets with env:production or env:dev can help facilitate this.

For example, if your alerting infrastructure is tag-aware, rules can be made less sensitive to increased load or memory usage on a staging or development server than on production servers. If your infrastructure management system is sufficiently mature, you may be able to self-heal by scaling your application servers automatically.

Similarly, with Prometheus' file-based service discovery and regular calls to the DigitalOcean API (e.g., by a cronjob), you can dynamically configure metrics based on tags. You can fine-tune parameters like scrape interval, evaluation interval, and any external labels you want to apply — which may be tags themselves.
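
As a rough sketch of that idea (not an official integration; the API token, tag, node_exporter port, and output path below are all placeholder assumptions), a small script could turn tagged Droplets into a Prometheus file_sd target list:

import json
import os
import requests

# Sketch: turn Droplets carrying a given tag into a Prometheus file_sd target list.
# Token, tag, scrape port and output path are placeholders.
TOKEN = os.environ["DO_TOKEN"]
TAG = "env:production"
OUTPUT = "/etc/prometheus/targets/digitalocean.json"

resp = requests.get(
    "https://api.digitalocean.com/v2/droplets",
    headers={"Authorization": "Bearer " + TOKEN},
    params={"tag_name": TAG, "per_page": 200},
)
resp.raise_for_status()

targets = []
for droplet in resp.json()["droplets"]:
    for network in droplet["networks"]["v4"]:
        if network["type"] == "public":
            targets.append(network["ip_address"] + ":9100")  # assumed node_exporter port

with open(OUTPUT, "w") as f:
    json.dump([{"targets": targets, "labels": {"do_tag": TAG}}], f, indent=2)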

Logging and Data Retention Policies

Logging and metric data is invaluable, especially during outages, but storing that data can be costly on high-traffic systems. Tagging resources and including those tags in your structured logs can be used to dictate log retention policies. This can help optimize disk usage to ensure critical infrastructure has the most log retention while test servers get little or none. Systems such as RSyslog can apply rules based on JSON-structured logs in CEE format.

Deployments and Infrastructure Management

A common strategy for testing and rolling out deployments is to use blue/green deployments. Implementing a blue/green deployment becomes easy with tags; simply use two tags, blue and green, to track which Droplets are in which set, then use the API to trigger the promotion (by switching traffic over, e.g. by updating a Floating IP, load balancer configuration, or DNS record).
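
Purely as a sketch of that promotion step (the Floating IP, token, and tag names here are placeholder assumptions), reassigning a Floating IP to the newly promoted set via the API could look like this:

import os
import requests

# Sketch: point a Floating IP at the first Droplet tagged "green" to promote the green set.
TOKEN = os.environ["DO_TOKEN"]        # placeholder environment variable
FLOATING_IP = "203.0.113.10"          # placeholder Floating IP (documentation range)
API = "https://api.digitalocean.com/v2"
HEADERS = {"Authorization": "Bearer " + TOKEN}

# Find the Droplets currently tagged "green".
green = requests.get(API + "/droplets", headers=HEADERS,
                     params={"tag_name": "green"}).json()["droplets"]

# Reassign the Floating IP to the first of them.
action = requests.post(
    API + "/floating_ips/" + FLOATING_IP + "/actions",
    headers=HEADERS,
    json={"type": "assign", "droplet_id": green[0]["id"]},
)
action.raise_for_status()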

Infrastructure management is an art in and of itself. Recently, our own Tommy Murphy contributed support for DigitalOcean's tags to HashiCorp's Terraform infrastructure automation platform. This has been used to build lightweight firewall management tooling (GitHub) to ensure that hosts with a given tag can pass traffic but will drop traffic from any other host.

What's Coming Up Next?

Being able to tag your Droplets is only the beginning. We know that Block Storage, Floating IPs, DNS records, and other resources are all critical parts of your production infrastructure too. In order to make your deployment, monitoring, and development infrastructure simpler to manage, we're working on letting you manage entire groups of resources via tags over the coming months.

Conclusion

Thank you to everyone who has used tags and provided feedback. We hope these improvements help make it a little easier for you to build and ship great things. Please keep the feedback coming. How do you use tagging to manage your infrastructure? We would love to hear from you!


Working the polls

Published 19 Oct 2016 by legoktm in The Lego Mirror.

After being generally frustrated by this election cycle and wanting to contribute to make it less so, I decided to sign up to work at the polls this year, and help facilitate the election. Yesterday, we had election officer training by the Santa Clara County Registrar of Voters' office. It was pretty fascinating to me given that I've only ever voted by mail, and haven't been inside a physical polling place in years. But the biggest takeaway I had was that California goes to extraordinary lengths to ensure that everyone can vote. There's basically no situation in which someone who claims they are eligible to vote is denied being able to vote. Sure, they end up voting provisionally, but I think that is significantly better than turning them away and telling them they can't vote.


"wiki is currently unable to handle this request" after installing SimpleMathJax on MediaWiki

Published 19 Oct 2016 by hasanghaforian in Newest questions tagged mediawiki - Webmasters Stack Exchange.

I need to show mathematical terms in mediawiki-1.26.2, so I tried to install SimpleMathJax on MediaWiki. I followed the steps described on the extension page:

I downloaded SimpleMathJax-master.zip, then extracted it, renamed it to SimpleMathJax and moved it to the extensions directory of MediaWiki. I added these lines to LocalSettings.php:

# End of automatically generated settings.
# Add more configuration options below.
require_once "$IP/extensions/SimpleMathJax/SimpleMathJax.php";
$wgSimpleMathJaxSize = 120;

But when I want to browse to the Wiki, I get this error:

wiki is currently unable to handle this request.

I also tried replacing the require_once "$IP/extensions/SimpleMathJax/SimpleMathJax.php"; line with wfLoadExtension( 'SimpleMathJax' );, but the problem remains.


MediaWiki foreground not rendering tabs in content section

Published 8 Oct 2016 by Protocol96 in Newest questions tagged mediawiki - Server Fault.

We are having issues getting the foreground or foundation skins in MediaWiki to render any tabs in the content section of our pages. This site is a demo, hosted on GoDaddy, but we have also tried clean installs on Fedora locally and on Linode.

All the applicable CSS and JS seem to be loading correctly, and there are no obvious errors in the logs. The skin/theme does correctly render the navbar section at the top of the pages. Maybe we are doing something wrong in the syntax, or maybe there is another step to enabling the skin/theme that we are missing?

Any help would be appreciated.

https://protocol96.com/mw/Main_Page


Google Assistant & Wikipedia

Published 6 Oct 2016 by addshore in Addshore.

The Google Assistant is essentially a chat bot that you can talk to within the new Allo chat app. The assistant is also baked into some new Google hardware, such as the Pixel phones. During a quick test of the assistant, I noticed that if you ask it to “tell me an interesting fact”, it will sometimes respond with facts from Wikipedia.

As can be seen in the image, when chatting to the bot you can ask for an interesting fact. The bot then responds and a collection of suggested tiles are placed at the bottom of the chat window. One of these tiles suggests looking at the source. Clicking this will prompt you to open https://en.wikipedia.org/wiki/August in a browser or in the Wikipedia app.

Once open, a quick scan of the article will reveal:

August is the month with highest birth rate in the United States.


A Sausage Went for a Walk One Day

Published 4 Oct 2016 by carinamm in State Library of Western Australia Blog.

Can cats fly? 
Can a goat be a superhero?
Can a sausage go for a walk? 

sausage
Peter Kendall, Out of the gate marched breakfast,  reproduced in A Sausage Went for a Walk by Ellisha Majid and Peter Kendall, 1991. Published by Fremantle Press. 

In picture books anything is possible, just as anything is possible in the imagination of a child. The power of picture books to ignite imagination is highlighted in our current exhibition, A Sausage Went for a Walk One Day – celebrating Western Australian picture books and 40 fabulous years of Fremantle Press.

Beginning with the award-winning A Sausage Went for a Walk (1991) by Ellisha Majid and Peter Kendall, the exhibition includes artwork drawn from the State Library's Williams collection of illustrations, as well as artwork loaned from illustrators.

Readers of picture books usually only see the finished product in the form of the published book. The process of book making is revealed in this exhibition through sketches, storyboards, colour experiments, text revisions, and published artwork.  The artworks in the exhibition reveal surprising insights into how picture books are brought to life. This post will explore five of these ideas.

1. A work in progress
Illustrations from Palo Morgan's book Cat Balloon highlight how stories often change during the process of illustration. A closer look at the sketches shows Cat Balloon depicted with arms outstretched and wings attached to his back. In the published illustration below, Cat Balloon is shown pursuing his dream to fly by other means.

slwa_b4638614_13
Palo Morgan, To sea in a large silver spoon, reproduced in Cat Balloon by Palo Morgan, 1992. Published by Fremantle Press. State Library of Western Australia collection, PWC/253 

2. From big to small 
Picture books are portable art. They are small enough to be held in little hands. To capture detail of shape and form, many illustrators choose to work at a larger scale. Moira Court's Leaping in a single bound, for the story My Superhero (written by Chris Owen), is more than four times the size of the published book!

slwa_b3302613_22_master
Moira Court, Leaping in a single bound, reproduced in My Superhero by Chris Owen and Moira Court. Published by Fremantle Press, 2012. State Library of Western Australia collection, PWC/218. 

3. Hints of home 
A picture book can be found and read anywhere in the world, and translated into a variety of different languages and formats. The picture books featured in A Sausage Went for a Walk One Day have all been published in Western Australia, and embedded within them are connections to place and the daily lives of their creators.

Street scenes of Fremantle in Sonia Martinez's illustrations for The World According to Warren (written by Craig Silvey) might be recognisable to visitors.

pwc_115_martinez
Sonia Martinez, And he was never again distracted whilst on duty, reproduced in The World According to Warren by Craig Silvey and Sonia Martinez. Published by Fremantle Press, 2007. State Library of Western Australia collection, PWC/115

The colours and patterns found in Sally Morgan's illustration Beneath the stars we all sleep are inspired by her close observation of the Western Australian landscape, and the interconnectedness of humans and the natural environment.

weallsleepcropped
Sally Morgan, Beneath the stars we all sleep, reproduced in We All Sleep by Ezekiel Kwaymullina and Sally Morgan. Published by Fremantle Press, 2016.

4. Universal themes 
Picture books succinctly deal with complex themes and messages of global relevance, ranging from cultural diversity, social inclusion and environmental concern to the impacts of historical events, particularly war and its aftermath. They communicate human emotions as varied as joy, loneliness and grief, and themes of family, friends, belonging, and home. They affirm the importance of the imagination, which has the power to unlock dreams and human potential.

theotherbears

Michael Thompson, But we love their food, reproduced in The Other Bears by Michael Thompson. Published by Fremantle Press, 2010.

 

5. Medium and the message
Illustrators carefully select a style and technique which complements the words. Some styles are detailed, while other styles are more spontaneous and free-flowing. Each technique has a different effect on the viewer. The repetition of shapes and the geometric style of Kyle Hughes-Odgers, as seen in On a Small Island and Ten Tiny Things, draws attention to details in line, pattern, and shape. In contrast, Brian Simmonds's realism in Lighthouse Girl and Light Horse Boy provokes an emotional response.

on-a-small-island-jpg

Kyle Hughes-Odgers, So many strange buildings, reproduced in On a Small Island by Kyle Hughes-Odgers. Published by Fremantle Press, 2014.  

A Sausage Went for a Walk One Day is presented by Fremantle Press, the State Library of Western Australia and AWESOME Arts. It was launched as part of the 2016 AWESOME Festival and Fremantle Press 40 Year Anniversary celebrations.  It runs until 31 December 2016. For opening hours go to www.slwa.wa.gov.au


Filed under: Children's Literature, community events, Exhibitions, Picture Books, State Library of Western Australia, Uncategorized Tagged: A Sausage Went for a Walk One Day, art, AWESOME Festival, childrens, exhibitions, Fremantle Press, picture books, State Library of Western Australia, State Library WA

Personalized Group Recommendations on Flickr

Published 30 Sep 2016 by Mehul Patel in code.flickr.com.

There are two primary paradigms for the discovery of digital content. First is the search paradigm, in which the user is actively looking for specific content using search terms and filters (e.g., Google web search, Flickr image search, Yelp restaurant search, etc.). Second is a passive approach, in which the user browses content presented to them (e.g., NYTimes news, Flickr Explore, and Twitter trending topics). Personalization benefits both approaches by providing relevant content that is tailored to users’ tastes (e.g., Google News, Netflix homepage, LinkedIn job search, etc.). We believe personalization can improve the user experience at Flickr by guiding both new as well as more experienced members as they explore photography. Today, we’re excited to bring you personalized group recommendations.

Flickr Groups are great for bringing people together around a common theme, be it a style of photography, camera, place, event, topic, or just some fun. Community members join for several reasons—to consume photos, to get feedback, to play games, to get more views, or to start a discussion about photos, cameras, life or the universe. We see value in connecting people with appropriate groups based on their interests. Hence, we decided to start the personalization journey by providing contextually relevant and personalized content that is tuned to each person’s unique taste.

Of course, in order to respect users’ privacy, group recommendations only consider public photos and public groups. Additionally, recommendations are private to the user. In other words, nobody else sees what is recommended to an individual.

In this post we describe how we are improving Flickr’s group recommendations. In particular, we describe how we are replacing a curated, non-personalized, static list of groups with a dynamic group recommendation engine that automatically generates new results based on user interactions to provide personalized recommendations unique to each person. The algorithms and backend systems we are building are broad and applicable to other scenarios, such as photo recommendations, contact recommendations, content discovery, etc.

Group_recommendations2.png

Figure: Personalized group recommendations

Challenges

One challenge of recommendations is determining a user’s interests. These interests could be user-specified, explicit preferences or could be inferred implicitly from their actions, supported by user feedback. For example:

Another challenge of recommendations is figuring out group characteristics. I.e.: what type of group is it? What interests does it serve? What brings Flickr members to this group? We can infer this by analyzing group members, photos posted to the group, discussions and amount of activity in the group.

Once we have figured out user preferences and group characteristics, recommendation essentially becomes a matchmaking process. At a high level, we want to support three use cases:

Collaborative Filtering

One approach to recommender systems is presenting similar content in the current context of actions. For example, Amazon’s “Customers who bought this item also bought” or LinkedIn’s “People also viewed.” Item-based collaborative filtering can be used for computing similar items.

collaborative_filtering

Figure: Collaborative filtering in action

By Moshanin (Own work) [CC BY-SA 3.0] from Wikipedia

Intuitively, two groups are similar if they have the same content or same set of users. We observed that users often post the same photo to multiple groups. So, to begin, we compute group similarity based on a photo’s presence in multiple groups.  

Consider the following sample matrix M(Gi -> Pj) constructed from group photo pools, where 1 means a corresponding group (Gi) contains an image, and empty (0) means a group does not contain the image.

matrix1

From this, we can compute M.M' (where M' is the transpose of M), which gives us the number of common photos between every pair of groups (Gi, Gj):

matrix2

We use modified cosine similarity to compute a similarity score between every pair of groups:

cosinesimilarity

To make this calculation robust, we only consider groups that have a minimum of X photos and keep only strong relationships (i.e., groups that have at least Y common photos). Finally, we use the similarity scores to come up with the top k-nearest neighbors for each group.
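
As a toy illustration of this step only (plain cosine similarity on a small in-memory matrix stands in for the modified cosine similarity and the Hadoop-scale computation described here; the thresholds and data are made up):

import numpy as np

# Sketch: binary group-photo matrix M, rows = groups, columns = photos.
M = np.array([
    [1, 1, 0, 0, 1],
    [1, 0, 1, 0, 1],
    [0, 0, 1, 1, 0],
], dtype=float)

MIN_PHOTOS = 1   # stand-in for X: minimum photos per group
MIN_COMMON = 1   # stand-in for Y: minimum common photos to keep a pair
TOP_K = 2        # neighbours kept per group

common = M @ M.T                              # photos shared by each pair of groups
norms = np.sqrt(np.diag(common))              # square root of each group's photo count
similarity = common / np.outer(norms, norms)  # plain cosine similarity as a stand-in

# Apply the robustness filters described above.
small_groups = np.diag(common) < MIN_PHOTOS
similarity[small_groups, :] = 0.0
similarity[:, small_groups] = 0.0
similarity[common < MIN_COMMON] = 0.0
np.fill_diagonal(similarity, 0.0)

# Top k-nearest neighbours for each group.
neighbours = np.argsort(-similarity, axis=1)[:, :TOP_K]
print(neighbours)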

We also compute group similarity based on group membership —i.e., by defining a group-user relationship (Gi -> Uj) matrix. It is interesting to note that the results obtained from this relationship are very different compared to the (Gi, Pj) matrix. The group-photo relationship tends to capture groups that are similar by content (e.g., “macro photography”). On the other hand, the group-user relationship gives us groups that the same users have joined but are possibly about very different topics, thus providing us with a diversity of results. We can extend this approach by computing group similarity using other features and relationships (e.g., autotags of photos to cluster groups by themes, geotags of photos to cluster groups by place, frequency of discussion to cluster groups by interaction model, etc.).

Using this we can easily come up with a list of similar groups (Use Case # 1). We can either merge the results obtained by different similarity relationships into a single result set, or keep them separate to power features like “Other groups similar to this group” and “People who joined this group also joined.”

We can also use the same data for recommending groups to users (Use Case # 2). We can look at all the groups that the user has already joined and recommend groups similar to those.

To come up with a list of relevant groups for a photo (Use Case # 3), we can compute photo similarity either by using a similar approach as above or by using Flickr computer vision models for finding photos similar to the query photo. A simple approach would then be to recommend groups that these similar photos belong to.

Implementation

Due to the massive scale (millions of users x 100k groups) of data, we used Yahoo’s Hadoop Stack to implement the collaborative filtering algorithm. We exploited sparsity of entity-item relationship matrices to come up with a more efficient model of computation and used several optimizations for computational efficiency. We only need to compute the similarity model once every 7 days, since signals change slowly.

architecture_diagram

Figure: Computational architecture

(All logos and icons are trademarks of respective entities)

 

Similarity scores and top k-nearest neighbors for each group are published to Redis for quick lookups needed by the serving layer. Recommendations for each user are computed in real-time when the user visits the groups page. Implementation of the serving layer takes care of a few aspects that are important from a usability and performance point of view:

Cold Start

The drawback to collaborative filtering is that it cannot offer recommendations to new users who do not have any associations. For these users, we plan to recommend groups from an algorithmically computed list of top/trending groups alongside manual curation. As users interact with the system by joining groups, the recommendations become more personalized.

Measuring Effectiveness

We use qualitative feedback from user studies and alpha group testing to understand user expectation and to guide initial feature design. However, for continued algorithmic improvements, we need an objective quantitative metric. Recommendation results by their very nature are subjective, so measuring effectiveness is tricky. The usual approach taken is to roll out to a random population of users and measure the outcome of interest for the test group as compared to the control group (ref: A/B testing).

We plan to employ this technique and measure user interaction and engagement to keep improving the recommendation algorithms. Additionally, we plan to measure explicit signals such as when users click “Not interested.” This feedback will also be used to fine-tune future recommendations for users.

measuringeffectiveness

Figure: Measuring user engagement

Future Directions

While we’re seeing good initial results, we’d like to continue improving the algorithms to provide better results to the Flickr community. Potential future directions can be classified broadly into 3 buckets: algorithmic improvements, new product use cases, and new recommendation applications.

If you’d like to help, we’re hiring. Check out our jobs page and get in touch.

Product Engineering: Mehul Patel, Chenfan (Frank) Sun,  Chinmay Kini



Ready, Set, Hacktoberfest!

Published 26 Sep 2016 by DigitalOcean in DigitalOcean Blog.

October is a special time for open source enthusiasts, open source beginners, and for us at DigitalOcean: It marks the start of Hacktoberfest, which enters its third year this Saturday, October 1!

What's Hacktoberfest?

Hacktoberfest—in partnership with GitHub—is a month-long celebration of open source software. Maintainers are invited to guide would-be contributors towards issues that will help move the project forward, and contributors get the opportunity to give back to both projects they like and ones they've just discovered. No contribution is too small—bug fixes and documentation updates are valid ways of participating.

Rules and Prizes

To participate, first sign up on the Hacktoberfest site. And if you open up four pull requests between October 1 and October 31, you'll win a free, limited edition Hacktoberfest T-shirt. (Pull requests do not have to be merged and accepted; as long as they've been opened between the very start of October 1 and the very end of October 31, they count towards a free T-shirt.)

Connect with other Hacktoberfest participants (Hacktoberfestants?) by using the hashtag, #Hacktoberfest, on your social media platform of choice.


What's Different This Year

We wanted to make it easier for contributors to locate projects that needed help, and we also wanted project maintainers to have the ability to highlight issues that were ready to be worked on. To that end, we've introduced project labeling, allowing project maintainers to add a "Hacktoberfest" label to any issues that contributors could start working on. Browse participating projects on GitHub.

We've also put together a helpful list of resources for both project maintainers and contributors on the Hacktoberfest site.

Ready to get started with Hacktoberfest? Sign up to participate today.

Hacktoberfest


The Festival Floppies

Published 22 Sep 2016 by Jason Scott in ASCII by Jason Scott.

In 2009, Josh Miller was walking through the Timonium Hamboree and Computer Festival in Baltimore, Maryland. Among the booths of equipment, sales, and demonstrations, he found a vendor was selling an old collection of 3.5″ floppy disks for DOS and Windows. He bought it, and kept it.

A few years later, he asked me if I wanted them, and I said sure, and he mailed them to me. They fell into the everlasting Project Pile, and waited for my focus and attention.

They looked like this:

cs6rforueaagbcg

I was particularly interested in the floppies that appeared to be someone’s compilation of DOS and Windows programs in the most straightforward form possible – custom laser-printed directories on the labels, and no obvious theme as to why this shareware existed on them. They looked like this, separated out:

cs6reiouaaaamnd

There were other floppies in the collection, as well:

cswfhv2xeaa4hrl

They’d sat around for a few years while I worked on other things, but the time finally came this week to spend some effort to extract data.

There’s debates on how to do this that are both boring and infuriating, and I’ve ended friendships over them, so let me just say that I used a USB 3.5″ floppy drive (still available for cheap on Amazon; please take advantage of that) and a program called WinImage that will pull out a disk image in the form of a .ima file from the floppy drive. Yes, I could do a flux imaging of these disks, but sorry, that’s incredibly insane overkill. These disks contain files put on there by a person and we want those files, along with the accurate creation dates and the filenames and contents. WinImage does it.

Sometimes, the floppies have some errors and require trying over to get the data off them. Sometimes it takes a LOT of tries. If after a mass of tries I am unable to do a full disk scan into a disk image, I try just mounting it as A: in Windows and pulling the files off – they sometimes are just fine but other parts of the disk are dead. I make this a .ZIP file instead of a .IMA file. This is not preferred, but the data gets off in some form.

Some of them (just a handful) were not even up for this – they're sitting in a small plastic bin and I'll try some other methods in the future. The ratio of Imaged-ZIPed-Dead was very good, like 40-3-3.

I dumped most of the imaged files (along with the ZIPs) into this item.

This is a useful item if you, yourself, want to download about 100 disk image files and “do stuff” with them. My estimation is that all of you can be transported from the first floor to the penthouse of a skyscraper with 4 elevator trips. Maybe 3. But there you go, folks. They’re dropped there and waiting for you. Internet Archive even has a link that means “give me everything at once“. It’s actually not that big at all, of course – about 260 megabytes, less than half of a standard CD-ROM.

I could do this all day. It’s really easy. It’s also something most people could do, and I would hope that people sitting on top of 3.5” floppies from DOS or Windows machines would be up for paying the money for that cheap USB drive and something like WinImage and keep making disk images of these, labeling them as best they can.

I think we can do better, though.

The Archive is running the Emularity, which includes a way to run EM-DOSBOX, which can not only play DOS programs but even play Windows 3.11 programs as well.

Therefore, it’s potentially possible for many of these programs, especially ones particularly suited as stand-alone “applications”, to be turned into in-your-browser experiences to try them out. As long as you’re willing to go through them and get them prepped for emulation.

Which I did.

floppo

The Festival Floppies collection is over 500 programs pulled from these floppies that were imaged earlier this week. The only thing they have in common was that they were sitting in a box on a vendor table in Baltimore in 2009, and I thought in a glance they might run and possibly be worth trying out. After I thought this (using a script to present them for consideration), the script did all the work of extracting the files off the original floppy images, putting the programs into an Internet Archive item, and then running a “screen shotgun” I devised with a lot of help a few years back that plays the emulations, takes the “good shots” and makes them part of a slideshow so you can get a rough idea of what you’re looking at.

00_coverscreenshot

You either like the DOS/Windows aesthetic, or you do not. I can’t really argue with you over whatever direction you go – it’s both ugly and brilliant, simple and complex, dated and futuristic. A lot of it depended on the authors and where their sensibilities lay. I will say that once things started moving to Windows, a bunch of things took on a somewhat bland sameness due to the “system calls” for setting up a window, making it clickable, and so on. Sometimes a brave and hearty soul would jazz things up, but they got rarer indeed. On the other hand, we didn’t have 1,000 hobbyist and professional programs re-inventing the wheel, spokes, horse, buggy, stick shift and gumball machine each time, either.

screenshot_05

Just browsing over the images, you probably can see cases where someone put real work into the whole endeavor – if they seem to be nicely arranged words, or have a particular flair with the graphics, you might be able to figure which ones have the actual programming flow and be useful as well. Maybe not a direct indicator, but certainly a flag. It depends on how much you want to crate-dig through these things.

Let’s keep going.

Using a “word cloud” script that showed up as part of an open source package, I rewrote it into something I call a “DOS Cloud”. It goes through these archives of shareware, finds all the textfiles in the .ZIP that came along for the ride (think README.TXT, READ.ME, FILEID.DIZ and so on) and then runs to see what the most frequent one and two word phrases are. This ends up being super informative, or not informative at all, but it’s automatic, and I like automatic. Some examples:

Mad Painter: paint, mad, painter, truck, joystick, drive, collision, press, cloud, recieve, mad painter, dos prompt

Screamer: screamer, code, key, screen, program, press, command, memory, installed, activate, code key, memory resident, correct code, key combination, desired code

D4W20: timberline, version, game, sinking, destroyer, gaming, smarter, software, popularity, timberline software, windows version, smarter computer, online help, high score

Certainly in the last case, those words are much more informative than the name D4W20 (which actually stands for “Destroyer for Windows Version 2.0”), and so the machine won the day. I’ve called this “bored intern” level before and I’d say it’s still true – the intern may be bored, but they never stop doing the process, either. I’m sure there’s some nascent class discussion here, but I’ll say that I don’t entirely think this is work for human beings anymore. It’s just more and more algorithms at this point. Reviews and contextual summaries not discernible from analysis of graphics and text are human work.

For now.
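
A bare-bones sketch of this kind of phrase counting (not the actual DOS Cloud script, just the general idea: pull the text files out of a ZIP and tally the one- and two-word phrases) would be something like:

import re
import zipfile
from collections import Counter

# Sketch: count the most frequent one- and two-word phrases in the text files
# bundled inside a shareware ZIP (README.TXT, READ.ME, FILE_ID.DIZ and friends).
TEXTLIKE = re.compile(r"\.(txt|me|diz|doc|1st)$", re.IGNORECASE)

def dos_cloud(zip_path, top=10):
    words = []
    with zipfile.ZipFile(zip_path) as zf:
        for name in zf.namelist():
            if TEXTLIKE.search(name):
                text = zf.read(name).decode("cp437", errors="replace")
                words.extend(re.findall(r"[a-z']+", text.lower()))
    ones = Counter(words)
    twos = Counter(zip(words, words[1:]))
    return ones.most_common(top), twos.most_common(top)

print(dos_cloud("MADPAINT.ZIP"))  # hypothetical file name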

screenshot_00

These programs! There are a lot of them, and a good percentage solve problems we don’t have anymore or use entire other methods to deal with the information. Single-use programs to deal with Y2K issues, view process tasks better, configure your modem, add a DOS interface, or track a pregnancy. Utilities to put the stardate in the task bar, applications around coloring letters, and so it goes. I think the screenshots help make decisions, if you’re one of the people idly browsing these sets and have no personal connection to DOS or Windows 3.1 as a lived experience.

I and others will no doubt write more and more complicated methods for extracting or providing metadata for these items, and work I’m doing in other realms goes along with this nicely. At some point, the entries for each program will have a complication and depth that rivals most anything written about the subjects at the time, when they were the state of the art in computing experience. I know that time is coming, and it will be near-automatic (or heavily machine-assisted) and it will allow these legions of nearly-lost programs to live again as easily as a few mouse clicks.

But then what?

screenshot_03

But Then What is rapidly becoming the greatest percentage of my consideration and thought, far beyond the relatively tiny hurdles we now face in terms of emulation and presentation. It’s just math now with a lot of what’s left (making things look/work better on phones, speeding up the browser interactions, adding support for disk swapping or printer output or other aspects of what made a computer experience lasting to its original users). Math, while difficult, has a way of outing its problems over time. Energy yields results. Processing yields processing.

No, I want to know what’s going to happen beyond this situation, when the phones and browsers can play old everything pretty accurately, enough that you’d “get it” to any reasonable degree playing around with it.

Where do we go from there? What’s going to happen now? This is where I’m kind of floating these days, and there are ridiculously scant answers. It becomes very “journey of the mind” as you shake the trees and only nuts come out.

To be sure, there’s a sliver of interest in what could be called “old games” or “retrogaming” or “remixes/reissues” and so on. It’s pretty much only games, it’s pretty much roughly 100 titles, and it’s stuff that has seeped enough into pop culture or whose parent companies still make enough bank that a profit motive serves to ensure the “IP” will continue to thrive, in some way.

The Gold Fucking Standard is Nintendo, who have successfully moved into such a radical space of “protecting their IP” that they’ve really successfully started moving into wrecking some of the past – people who make “fan remixes” might be up for debate as to whether they should do something with old Nintendo stuff, but laying out threats for people recording how they experienced the games, and for any recording of the games for any purpose… and sending legal threats at anyone and everyone even slightly referencing their old stuff, as a core function.. well, I’m just saying perhaps ol’ Nintendo isn’t doing itself any favors but on the other hand they can apparently be the most history-distorting dicks in this space quadrant and the new games still have people buy them in boatloads. So let’s just set aside the Gold Fucking Standard for a bit when discussing this situation. Nobody even comes close.

There’s other companies sort of taking this hard-line approach: “Atari”, Sega, Capcom, Blizzard… but again, these are game companies markedly defending specific games that in many cases they end up making money on. In some situations, it’s only one or two games they care about and I’m not entirely convinced they even remember they made some of the others. They certainly don’t issue gameplay video takedowns and on the whole, historic overview of the companies thrives in the world.

But what a small keyhole of software history these games are! There’s entire other experiences related to software that are both available, and perhaps even of interest to someone who never saw this stuff the first time around. But that’s kind of an educated guess on my part. I could be entirely wrong on this. I’d like to find out!

Pursuing this line of thought has sent me hurtling into What are even museums and what are even public spaces and all sorts of more general questions that I have extracted various answers for and which it turns out are kind of turmoil-y. It also has informed me that nobody kind of completely knows but holy shit do people without managerial authority have ideas about it. Reeling it over to the online experience of this offline debated environment just solves some problems (10,000 people look at something with the same closeness and all the time in the world to regard it) and adds others (roving packs of shitty consultant companies doing rough searches on a pocket list of “protected materials” and then sending out form letters towards anything that even roughly matches it, and calling it a ($800) day).

Luckily, I happen to work for an institution that is big on experiments and giving me a laughably long leash, and so the experiment of instant online emulated computer experience lives in a real way and can allow millions of people (it’s been millions, believe it or not) to instantly experience those digital historical items every second of every day.

So even though I don’t have the answers, at all, I am happy that the unanswered portions of the Big Questions haven’t stopped people from deriving a little joy, a little wonder, a little connection to this realm of human creation.

That’s not bad.

screenshot_00-1


DNS inside PHP-FPM chroot jail on OpenBSD 6.0 running nginx 1.10.1, PHP 7.0.8, MariaDB 10.0.25 and MediaWiki 1.27.1

Published 18 Sep 2016 by Till Kraemer in Newest questions tagged mediawiki - Server Fault.

I'm running nginx 1.10.1 on OpenBSD 6.0 with the packages php-7.0.8p0, php-curl-7.0.8p0, php-fastcgi-7.0.8p0, php-gd-7.0.8p0, php-mcrypt-7.0.8p0, php-mysqli-7.0.8p0, mariadb-client-10.0.25v1 and mariadb-server-10.0.25p0v1.

I have several MediaWiki 1.27.1 installations, one pool for images and several language wikis accessing the pool. Each installation has its own virtual subdomain configured in nginx.

php70_fpm runs chrooted, /etc/php-fpm.conf looks like this:

chroot = /path/to/chroot/jail

listen = /path/to/chroot/jail/run/php-fpm.sock

/etc/nginx/nginx/sites-available/en.domain.com looks like this:

fastcgi_pass   unix:run/php-fpm.sock;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;

/etc/my.cnf looks like this:

port            = 1234
socket          = /path/to/mysql.sock
bind-address    = 127.0.0.1
skip-external-locking
#skip-networking

When I try to fetch image descriptions from pool.domain.com on en.domain.com, I'm getting a "Couldn't resolve host pool.domain.com" error.

As soon as I run php_fpm without chroot, file descriptions are fetched from the pool without any problem.

I don't want to copy stuff from /etc into /path/to/chroot/jail so what can I do? Are there some PHP 7 modules I could use? Do I have to play around with unbound?

Any help is more than welcome!

Thanks and cheers,

Till


Simple MediaWiki backup

Published 18 Sep 2016 by Brian S in Newest questions tagged mediawiki - Server Fault.

I am currently on contract with a small (<250 accounts) municipal water supply company. One of the things I'm doing is rewriting their ten-years-out-of-date procedures manual, and after some discussion with the company's president and with the treasurer, I settled on a localhost MediaWiki install.

The problem I'm currently having is with a backup of the wiki. (The monitor of the laptop currently hosting the wiki began to fail this week, which moved data backup to the front of my priorities.) I can certainly back it up, and I know how to restore it from backup. However, this contracting job is not a permanent placement, and eventually the office manager(s) will be responsible for it. They are not especially tech savvy, though, and the MediaWiki backup instructions involve options like command-line tools, which are not things they are particularly interested in learning.

Is there any way I can simplify the backup & restore process (in particular, the database backup; I am confident the managers can handle files if need be)?

The computer running the localhost wiki is a laptop with Windows 10, running XAMPP (Apache 2.4.17, MySQL 5.0.11, PHP 5.6.21)
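
For illustration, the kind of one-click script I have in mind would be something along these lines (the XAMPP paths, database name, and credentials below are placeholders, not the real ones):

import datetime
import os
import shutil
import subprocess

# Sketch of a one-click MediaWiki backup for a XAMPP install on Windows.
# All paths, the database name, and the credentials are placeholders.
MYSQLDUMP = r"C:\xampp\mysql\bin\mysqldump.exe"
WIKI_DIR = r"C:\xampp\htdocs\wiki"
BACKUP_DIR = r"D:\wiki-backups"
DB_NAME = "wikidb"
DB_USER = "wikiuser"
DB_PASS = "secret"

stamp = datetime.datetime.now().strftime("%Y-%m-%d_%H%M")

# Dump the database to an SQL file.
with open(os.path.join(BACKUP_DIR, "wiki_" + stamp + ".sql"), "w") as dump_file:
    subprocess.check_call(
        [MYSQLDUMP, "--user=" + DB_USER, "--password=" + DB_PASS, DB_NAME],
        stdout=dump_file,
    )

# Copy the wiki directory (LocalSettings.php, images/, extensions/) alongside the dump.
shutil.copytree(WIKI_DIR, os.path.join(BACKUP_DIR, "files_" + stamp))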

(Repost from SO after realizing this question is off-topic there.)


The RevisionSlider

Published 18 Sep 2016 by addshore in Addshore.

The RevisionSlider is an extension for MediaWiki that has just been deployed on all Wikipedias and other Wikimedia websites as a beta feature. The extension was developed by Wikimedia Germany as part of their focus on the technical wishes of the German-speaking Wikimedia community. This post will look at the RevisionSlider's design, development and use so far.

What is the RevisionSlider

Once enabled, the slider appears on the diff comparison page of MediaWiki, where it aims to enable users to more easily find the revision of a page that introduced or removed some text, as well as making navigation of the page's history easier. Each revision is represented by a vertical bar extending upward from the centre of the slider for revisions that added content, and downward from the slider for those that removed content. Two coloured pointers are used to indicate the revisions that are currently being compared; the colour coding matches the colour of the revision changes in the diff view. Each pointer can be moved by dragging it to a new revision bar or by clicking on the bar, at which point the diff is reloaded using ajax for the user to review. For pages with many revisions, arrows are enabled at the ends of the slider to move back and forward through revisions. Extra information about the revisions represented by bars is shown in a tooltip on hover.

Deployment & Usage

The RevisionSlider was deployed in stages: first to test sites in mid July 2016, then to the German Wikipedia and a few other sites that had been proactive in requesting the feature in late July 2016, and finally to all Wikimedia sites on 6 September 2016. In the 5 days following the deployment to all sites, the number of users using the feature increased from 1739 to 3721 (more than double), according to the Grafana dashboard: https://grafana.wikimedia.org/dashboard/db/mediawiki-revisionslider. This means the beta feature now has more users than the “Flow on user talk page” feature, and will soon overtake the number of users with ORES enabled unless we see a sudden slowdown: https://grafana.wikimedia.org/dashboard/db/betafeatures.

The wish

The wish that resulted in the creation of the RevisionSlider was wish #15 from the 2015 German Community Technical Wishlist; the Phabricator task can be found at https://phabricator.wikimedia.org/T139619. The wish roughly translates as: when viewing a diff, a section of the version history, especially the edit comments, should be shown. Lots of discussion followed to establish the actual issue that the community was having with the diff page, and the consensus was that it was generally very hard to move from one diff to another. The standard process within MediaWiki requires the user to start from the history page to select a diff. The diff then allows moving forward or backward revision by revision, but big jumps are not possible without first navigating back to the history page.

The first test version of the slider was inspired by a user script called RevisionJumper. This script provides a drop-down menu in the diff view with various options to jump to a version of the page considerably before or after the currently shown version. This can be seen in the German example below.

DerHexer (https://commons.wikimedia.org/wiki/File:Gadget-revisionjumper11_de.png), „Gadget-revisionjumper11 de“, https://creativecommons.org/licenses/by-sa/3.0/legalcode

The WMF Community Tech team worked on a prototype during autumn 2015, which was then picked up by WMDE at the Wikimedia Jerusalem hackathon in 2016 and pushed to fruition.

DannyH (WMF) (https://commons.wikimedia.org/wiki/File:Revslider_screenshot.jpg), „Revslider screenshot“, https://creativecommons.org/licenses/by-sa/4.0/legalcode

Further development

Links


Why the Apple II ProDOS 2.4 Release is the OS News of the Year

Published 15 Sep 2016 by Jason Scott in ASCII by Jason Scott.

prodos-2-4-splash

In September of 2016, a talented programmer released his own cooked update to a major company’s legacy operating system, purely because it needed to be done. A raft of new features, wrap-in programs, and bugfixes were included in this release, which I stress was done as a hobby project.

The project is understatement itself, simply called Prodos 2.4. It updates ProDOS, the last version of which, 2.0.3, was released in 1993.

You can download it, or boot it in an emulator on the webpage, here.

As an update unto itself, this item is a wonder – compatibility has been repaired for the entire Apple II line, from the first Apple II through to the Apple IIgs, as well as cases of various versions of 6502 CPUs (like the 65C02) or cases where newer cards have been installed in the Apple IIs for USB-connected/emulated drives. Important utilities related to disk transfer, disk inspection, and program selection have joined the image. The footprint is smaller, and it runs faster than its predecessor (a wonder in any case of OS upgrades).

The entire list of improvements, additions and fixes is on the Internet Archive page I put up.

prodos-2-4-bitsy-boot

The reason I call this the most important operating system update of the year is multi-fold.

First, the pure unique experience of a 23-year-gap between upgrades means that you can see a rare example of what happens when a computer environment just sits tight for decades, with many eyes on it and many notes about how the experience can be improved, followed by someone driven enough to go through methodically and implement all those requests. The inclusion of the utilities on the disk means we also have the benefit of all the after-market improvements in functionality that the continuing users of the environment needed, all time-tested, and all wrapped in without disturbing the size of the operating system programs itself. It’s like a gold-star hall of fame of Apple II utilities packed into the OS they were inspired by.

This choreographed waltz of new and old is unique in itself.

Next is that this is an operating system upgrade free of commercial and marketing constraints and drives. Compared with, say, an iOS upgrade that trumpets the addition of a search function or blares out a proud announcement that they broke maps because Google kissed another boy at recess. Or Windows 10, the 1968 Democratic Convention Riot of Operating Systems, which was designed from the ground up to be compatible with a variety of mobile/tablet products that are on the way out, and which were shoved down the throats of current users with a cajoling, insulting methodology with misleading opt-out routes and freakier and freakier fake-countdowns.

The current mainstream OS environment is, frankly, horrifying, and to see a pure note, a trumpet of clear-minded attention to efficiency, functionality and improvement, stands in testament to the fact that it is still possible to achieve this, albeit a smaller, slower-moving target. Either way, it’s an inspiration.

prodos-2-4-bitsy-bye

Last of all, this upgrade is a valentine not just to the community who makes use of this platform, but to the ideas of hacker improvement calling back decades before 1993. The amount of people this upgrade benefits is relatively small in the world – the number of folks still using Apple IIs is tiny enough that nearly everybody doing so either knows each other, or knows someone who knows everyone else. It is not a route to fame, or a resume point to get snapped up by a start-up, or a game of one-upsmanship shoddily slapped together to prove a point or drop a “beta” onto the end as a fig leaf against what could best be called a lab experiment gone off in the fridge. It is done for the sake of what it is – a tool that has been polished and made anew, so the near-extinct audience for it works to the best of their ability with a machine that, itself, is thought of as the last mass-marketed computer designed by a single individual.

That’s a very special day indeed, and I doubt the remainder of 2016 will top it, any more than I think the first 9 months have.

Thanks to John Brooks for the inspiration this release provides. 


Support RAM-Intensive Workloads with High Memory Droplets

Published 12 Sep 2016 by DigitalOcean in DigitalOcean Blog.

At DigitalOcean, we aim to make it simple and intuitive for developers to build and scale their infrastructure, from an application running on a single Droplet to a highly distributed service running across thousands of Droplets. As applications grow and become more specialized, so too do the configurations needed to run them effectively. Recently, with the launch of Block Storage, we made it easy to scale storage independently from compute at a lower price point than before. Today, we're doing something similar for RAM with the release of High Memory Droplet plans.

Standard Droplets offer a great balance of RAM, CPU, and storage for most general use-cases. Our new High Memory Droplets are optimized for RAM-intensive use-cases such as high-performance databases, in-memory caches like Redis or Memcache, or search indexes.

High Memory Droplet plans start with 16GB and scale up to 224GB of RAM with smaller ratios of local storage and CPU relative to Standard Plans. They are priced 25% lower than our Standard Plans on a per-gigabyte of RAM basis. Find all the details in the chart below and on our pricing page.

Pricing chart

We're actively looking at ways to support more specialized workloads and provide a platform that enables developers to tailor their environment to their applications' needs. We'd love to hear how we can better support your use-case. Let us know in the comments or over on our UserVoice.


GitHub service to deploy via git-mediawiki

Published 10 Sep 2016 by user48147 in Newest questions tagged mediawiki - Webmasters Stack Exchange.

I've been helping write documentation and manage the website for SuperTuxKart, an open-source racing game. The website uses MediaWiki, but we've discussed things and decided after our switch away from SourceForge hosting not to allow free account creation. However, this left us in a dilemma as to how to allow contributions to the wiki while avoiding the spam accounts that plagued the previous one.

We decided that allowing pull requests to submit content to GitHub, then deploying it to MediaWiki, would work well. After some research and experimenting, I created a semi-working shell script that uses git-mediawiki to:

  1. Clone the wiki
  2. Push the wiki to GitHub
  3. Fetch and merge changes from the wiki
  4. Fetch and merge changes from GitHub (though the wiki has priority in case of a merge conflict)
  5. Push to the wiki and to GitHub.

What I am looking for is a GitHub webhook service to run this script regularly (e.g. every 15 minutes) and whenever there is a commit to GitHub. It also needs some method of write access to the git repository without using my own credentials. I can't just have a script git pull updates to the server because MediaWiki pages can't be read from a normal git repository; they must be in a database.

The content of my script is below:

#!/bin/bash
#
# Auto sync script for the SuperTuxKart wiki

# Set up repo if not already done
if ! [ -d "supertuxkart.net" ]
then
    echo "Setting up repository..."

    git clone --origin wiki mediawiki::https://supertuxkart.net
    cd "supertuxkart.net"
    git remote add github https://github.com/MTres19/supertuxkart.net.git
    git push github master
    cd ..
fi


cd "supertuxkart.net"

git pull --rebase wiki
git pull --rebase -s recursive -X ours github master

git push wiki master
git push github master



Who’s Going to be the Hip Hop Hero

Published 8 Sep 2016 by Jason Scott in ASCII by Jason Scott.

People often ask me if there’s a way they can help. I think I have something.

So, the Internet Archive has had a wild hit on its hands with the Hip Hop Mixtapes collection, which I've been culling from multiple sources and then shoving into the Archive's drives through a series of increasingly complicated scripts. When I run my set of scripts, they do a good job of yanking the latest and greatest from a selection of sources, doing all the cleanup work, verifying the new mixtapes aren't already in the collection, and then uploading them. From there, the Archive's processes do the work, and then we have ourselves the latest tapes available to the world.

Since I see some of these tapes get thousands of listens within hours of being added, I know this is something people want. So, it’s a success all around.

mixtape

With success, of course, come the two flipside factors: my own interest in seeing the collection improved and expanded, and the complaints from people who know about this subject finding shortcomings in every little thing.

There is a grand complaint that this collection currently focuses on mixtapes from 2000 onwards (and really, 2005 onwards). Guilty. That’s what’s easiest to find. Let’s set that one aside for a moment, as I’ve got several endeavors to improve that.

What I need help with is that there are a mass of mixtapes that quickly fell off the radar in terms of being easily downloadable and I need someone to spend time grabbing them for the collection.

While impressive, the 8,000 tapes up on the archive are actually the ones that could be grabbed by scripts without any hangups, like the tapes falling out of favor or the sites offering them going down. Going by the global list I have, the total number of tapes could be as high as 20,000.

Again, it's a shame that a lot of pre-2000 mixtapes haven't yet fallen into my lap, but it's really a shame that mixtapes that existed, were uploaded to the internet, and were readily available just a couple of years ago have faded into obscurity. I'd like someone (or a coordinated group of someones) to help grab those disparate and at-risk mixtapes and get them into the collection.

I have information on all these missing tapes – the song titles, the artist information, and even information on mp3 size and what was in the original package. I’ve gone out there and tried to do this work, and I can do it, but it’s not a good use of my time – I have a lot of things I have to do and dedicating my efforts in this particular direction means a lot of other items will suffer.

So I’m reaching out to you. Hit me up at mixtapes@textfiles.com and help me build a set of people who are grabbing this body of work before it falls into darkness.

Thanks.


php 5.4 on CentOS7

Published 7 Sep 2016 by user374636 in Newest questions tagged mediawiki - Server Fault.

I am trying to install MediaWiki 1.27 on CentOS7.2. CentOS7.2 comes with php 5.4. However, at least 5.5.9 is required for MediaWiki 1.27.

I have installed and enabled rh-php56 from the SCL repo, which installed PHP 5.6 in parallel with the CentOS stock PHP 5.4.

Unfortunately, MediaWiki still gives me an error that I am running php5.4. Is there a way I can point MediaWiki to start using the newer php5.6 instead? Or am I better off replacing the stock php5.4 with php5.6 from Remi's repository?

Thank you!


Mediawiki LDAP setup issues

Published 7 Sep 2016 by justin in Newest questions tagged mediawiki - Server Fault.

I have MediaWiki set up on a Fedora machine and am attempting to get it working with our AD credentials. It is successfully connecting to our AD server, and you can log into MediaWiki fine with them. However, now I am trying to restrict it so that only our IT department users can log on. I can't seem to get the setup correct, though; the relevant section of my LocalSettings file is below:

require_once("/directo/LdapAuthentication.php");
$wgAuth = new LdapAuthenticationPlugin();
$wgLDAPDomainNames = array("MYDOMAIN");
$wgLDAPServerNames = array("MYDOMAIN" => "DOMAINIP");
$wgLDAPSearchStrings = array("MYDOMAIN" => "MYDOMAIN\\USER-NAME");
$wgLDAPEncryptionType = array("MYDOMAIN" => "ssl");

$wgLDAPBaseDNs = array("MYDOMAIN" => "dc=MYDOMAIN,dc=com");
$wgLDAPSearchAttributes = array("MYDOMAIN"=>"sAMAccountName");
$wgLDAPRetrievePrefs = array("MYDOMAIN" =>true);
$wgLDAPPreferences = array("MYDOMAIN" =>array('email' => 'mail','realname'=>'displayname'));
$wgLDAPDebug =3;
$wgLDAPExceptionDetails = true;

$wgLDAPRequiredGroups = array("MYDOMAIN" => array("OU=Users,OU=IT,OU=Admin,DC=MYDOMAIN,DC=com"));

If I remove that last line about required groups, I can log in fine. Our folder setup in AD, from top to bottom, is MYDOMAIN -> Admin -> IT -> Users -> John Doe. But like I said, if I include that last line, no one can log in to our MediaWiki.


A Crash of Rhinos in Wanneroo

Published 6 Sep 2016 by carinamm in State Library of Western Australia Blog.

With a flamboyance of flamingos, a murder of crows, a band of gorillas and a parliament of owls, Patricia Mullins’ A Crash of Rhinos is a picture book which delights the ears and the eyes. Marvel at the original illustrations and sketches currently on display at the Wanneroo Gallery Library and Cultural Centre.

The energetic illustrations and clever use of collective nouns in A Crash of Rhinos entertain and amuse readers of all ages. Patricia Mullins’ unique illustrative style involves collage and layering of coloured tissue paper, with pen and ink drawings, to build up the action in each of her scenes.


A band of gorillas, 2010, Patricia Mullins, State Library of Western Australia, PWC161 

The exhibition at Wanneroo Gallery marks the first time the complete collection from the book has been displayed outside of the State Library of Western Australia. Acquired in 2011 for the State Library’s collection, it includes original illustrations, preliminary sketches, storyboards, and working notes, which provide a unique insight into Patricia Mullins’ creative process.

One of Patricia Mullins’ motivations for writing and illustrating is to share her love of language through her stories.

“I’d love them (children) to learn about language through just discovering words, through making up their own words, through understanding that it’s easy and that it can be fun. It’s not about sitting down and learning ‘this is a collective noun’ – it’s about how to use that language…thinking about what language is.” – Patricia Mullins


A platter of platypuses, 2010, Patricia Mullins, State Library of Western Australia collection, PWC/169 

Patricia Mullins has authored and illustrated a number of picture books including Hattie and the Fox (1986), Crocodile Beat (1988), Dinosaur Encore (1992) and Lightning Jack (2012). A Crash of Rhinos, published by ABC Books, was named a Notable Book (Picture Book of the Year) in the Children’s Book Council of Australia Awards, 2011.

Visitors to the exhibition are invited to take part in a series of free activities and art workshops.

A Crash of Rhinos is on display at Wanneroo Gallery Library and Cultural Centre until 12 October 2016. For opening hours and further information visit: wanneroo.wa.gov.au


Filed under: Children's Literature, community events, Exhibitions, Libraries WA, Public Libraries Tagged: A Crash of Rhinos, children, city of wanneroo, collective nouns, Dr Peter Williams Collection, illustration, Patricia Mullins, picture books, State Library collections, wanneroo gallery library and cultural centre

Introducing Hatch (Beta)

Published 6 Sep 2016 by DigitalOcean in DigitalOcean Blog.

We're excited to launch Hatch (currently in beta), an online incubator program designed to help and support startups. Infrastructure can be one of the largest expenses facing these companies as they begin to scale. With Hatch, startups can receive access to both DigitalOcean credit and a range of other resources like 1-on-1 technical consultations.

Our goal with Hatch is to give back to the startup ecosystem and provide support to founders around the world so they can focus on building their businesses and not worry about their infrastructure. Having come through the Techstars program, we know just how valuable this support network can be.

The Hatch program includes a range of perks for startups to get started, including 12 months of DigitalOcean credit up to $100,000 (actual amount varies by partner organization). The program also offers various support services such as 1-on-1 technical consultations, access to mentorship opportunities, solutions engineering, and priority support. We're looking to go beyond just offering infrastructure credits. We want to provide founders with an educational and networking experience that will add tremendous value to their startup for the long term.

Is my startup eligible?

Starting now, we are piloting the program to a small group of startups. While in beta, we'll be working to refine the offering and eligibility criteria for future bootstrapped and funded startups who apply.

As of today (September 7, 2016), here are the Hatch eligibility requirements for startups:

You can apply to Hatch by visiting digitalocean.com/hatch and completing the online application. Want to learn more? Read the FAQ.

Want to become a partner?

We're currently adding over a hundred accelerators, investors, and partners to introduce startups around the world to the Hatch community. If you're interested in becoming a portfolio partner of Hatch, you can apply here.

Is your startup eligible and do you plan on applying? We'd love to hear from you! Reach out to us on Twitter or use the #hatchyouridea hashtag to tell us what your startup is all about.


Using Vault as a Certificate Authority for Kubernetes

Published 5 Sep 2016 by DigitalOcean in DigitalOcean Blog.

The Delivery team at DigitalOcean is tasked to make shipping internal services quick and easy. In December of 2015, we set out to design and implement a platform built on top of Kubernetes. We wanted to follow the best practices for securing our cluster from the start, which included enabling mutual TLS authentication between all etcd and Kubernetes components.

However, this is easier said than done. DigitalOcean currently has 12 datacenters in 3 continents. We needed to deploy at least one Kubernetes cluster to each datacenter, but setting up the certificates for even a single Kubernetes cluster is a significant undertaking, not to mention dealing with certificate renewal and revocation for every datacenter.

So, before we started expanding the number of clusters, we set out to automate all certificate management using Hashicorp's Vault. In this post, we'll go over the details of how we designed and implemented our certificate authority (CA).

Planning

We found it helpful to look at all of the communication paths before designing the structure of our certificate authority.

communication paths diagram

All Kubernetes operations flow through the kube-apiserver and persist in the etcd datastore. etcd nodes should only accept communication from their peers and the API server. The kubelets or other clients must not be able to communicate with etcd directly. Otherwise, the kube-apiserver's access controls could be circumvented. We also need to ensure that consumers of the Kubernetes API are given an identity (a client certificate) to authenticate to kube-apiserver.

With that information, we decided to create 2 certificate authorities per cluster. The first would be used to issue etcd related certificates (given to each etcd node and the kube-apiserver). The second certificate authority would be for Kubernetes, issuing the kube-apiserver and the other Kubernetes components their certificates. The diagram above shows the communications that use the etcd CA in dashed lines and the Kubernetes CA in solid lines.

With the design finalized, we could move on to implementation. First, we created the CAs and configured the roles to issue certificates. We then configured vault policies to control access to CA roles and created authentication tokens with the necessary policies. Finally, we used the tokens to pull the certificates for each service.

Creating the CAs

We wrote a script that bootstraps the CAs in Vault required for each new Kubernetes cluster. This script mounts new pki backends to cluster-unique paths and generates a 10 year root certificate for each pki backend.

vault mount -path $CLUSTER_ID/pki/$COMPONENT pki
vault mount-tune -max-lease-ttl=87600h $CLUSTER_ID/pki/etcd
vault write $CLUSTER_ID/pki/$COMPONENT/root/generate/internal \
common_name=$CLUSTER_ID/pki/$COMPONENT ttl=87600h

In Kubernetes, it is possible to use the Common Name (CN) field of client certificates as their user name. We leveraged this by creating different roles for each set of CN certificate requests:

vault write $CLUSTER_ID/pki/etcd/roles/member \
    allow_any_name=true \
    max_ttl="720h"

The role above, under the cluster's etcd CA, can create a 30 day cert for any CN. The role below, under the Kubernetes CA, can only create a certificate with the CN of "kubelet".

vault write $CLUSTER_ID/pki/k8s/roles/kubelet \
    allowed_domains="kubelet" \
    allow_bare_domains=true \
    allow_subdomains=false \
    max_ttl="720h"

We can create roles that are limited to individual CNs, such as "kube-proxy" or "kube-scheduler", for each component that we want to communicate with the kube-apiserver.
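
For example, a kube-proxy role can mirror the kubelet role above (a sketch following the same flags):

vault write $CLUSTER_ID/pki/k8s/roles/kube-proxy \
    allowed_domains="kube-proxy" \
    allow_bare_domains=true \
    allow_subdomains=false \
    max_ttl="720h"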

Because we configure our kube-apiserver in a high availability configuration, separate from the kube-controller-manager, we also generated a shared secret for those components to use with the --service-account-private-key-file flag and write it to the generic secrets backend:

openssl genrsa 4096 > token-key
vault write secret/$CLUSTER_ID/k8s/token key=@token-key
rm token-key

In addition to these roles, we created individual policies for each component of the cluster which are used to restrict which paths individual vault tokens can access. Here, we created a policy for etcd members that will only have access to the path to create an etcd member certificate.

cat <<EOT | vault policy-write $CLUSTER_ID/pki/etcd/member -
path "$CLUSTER_ID/pki/etcd/issue/member" {
  policy = "write"
}
EOT

This kube-apiserver policy only has access to the path to create a kube-apiserver certificate and to read the service account private key generated above.

cat <<EOT | vault policy-write $CLUSTER_ID/pki/k8s/kube-apiserver -
path "$CLUSTER_ID/pki/k8s/issue/kube-apiserver" {
  policy = "write"
}
path "secret/$CLUSTER_ID/k8s/token" {
  policy = "read"
}
EOT
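
For reference, a component holding a token with this policy can read the shared key back with a plain read (a sketch; -field extracts just the stored value):

vault read -field=key secret/$CLUSTER_ID/k8s/token > token-key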

Now that we have the structure of CAs and policies created in Vault, we need to configure each component to fetch and renew its own certificates.

Getting Certificates

We provided each machine with a Vault token that can be renewed indefinitely. This token is only granted the policies that it requires. We set up the token role in Vault with:

vault write auth/token/roles/k8s-$CLUSTER_ID \
period="720h" \
orphan=true \
allowed_policies="$CLUSTER_ID/pki/etcd/member,$CLUSTER_ID/pki/k8s/kube-apiserver..."

Then, we built tokens from that token role with the necessary policies for the given node. As an example, the etcd nodes were provisioned with a token generated from this command:

vault token-create \
  -policy="$CLUSTER_ID/pki/etcd/member" \
  -role="k8s-$CLUSTER_ID"
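
With such a token on the node, requesting a certificate is a single authenticated write against the issue path that the policy allows (a sketch; the FQDN is a placeholder):

VAULT_TOKEN=$NODE_TOKEN vault write $CLUSTER_ID/pki/etcd/issue/member \
    common_name=etcd-01.example.internal ttl=720h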

All that is left now is to configure each service with the appropriate certificates.

Configuring the Services

We chose to use consul-template to configure services since it will take care of renewing the Vault token, fetching new certificates, and notifying the services to restart when new certificates are available. Our etcd node consul-template configuration is:

{
  "template": {
    "source": "/opt/consul-template/templates/cert.template",
    "destination": "/opt/certs/etcd.serial",
    "command": "/usr/sbin/service etcd restart"
  },
  "vault": {
    "address": "VAULT_ADDRESS",
    "token": "VAULT_TOKEN",
    "renew": true
  }
}

Because consul-template will only write one file per template and we needed to split our certificate into its components (certificate, private key, and issuing certificate), we wrote a custom plugin that takes in the data, a file path, and a file owner. Our certificate template for etcd nodes uses this plugin:

{{ with secret "$CLUSTER_ID/pki/etcd/issue/member" "common_name=$FQDN"}}
{{ .Data.serial_number }}
{{ .Data.certificate | plugin "certdump" "/opt/certs/etcd-cert.pem" "etcd"}}
{{ .Data.private_key | plugin "certdump" "/opt/certs/etcd-key.pem" "etcd"}}
{{ .Data.issuing_ca | plugin "certdump" "/opt/certs/etcd-ca.pem" "etcd"}}
{{ end }}

The etcd process was then configured with the following options so that both peers and clients must present a certificate issued from Vault in order to communicate:

--peer-cert-file=/opt/certs/etcd-cert.pem 
--peer-key-file=/opt/certs/etcd-key.pem 
--peer-trusted-ca-file=/opt/certs/etcd-ca.pem 
--peer-client-cert-auth
--cert-file=/opt/certs/etcd-cert.pem 
--key-file=/opt/certs/etcd-key.pem 
--trusted-ca-file=/opt/certs/etcd-ca.pem 
--client-cert-auth

The kube-apiserver has one certificate template for communicating with etcd and one for the Kubernetes components, and the process is configured with the appropriate flags:

--etcd-certfile=/opt/certs/etcd-cert.pem 
--etcd-keyfile=/opt/certs/etcd-key.pem 
--etcd-cafile=/opt/certs/etcd-ca.pem
--tls-cert-file=/opt/certs/apiserver-cert.pem 
--tls-private-key-file=/opt/certs/apiserver-key.pem 
--client-ca-file=/opt/certs/apiserver-ca.pem 

The first three etcd flags allow the kube-apiserver to communicate with etcd with a client certificate; the two TLS flags allow it to host the API over a TLS connection; the last flag allows it to verify clients by ensuring that their certificates were signed by the same CA that issued the kube-apiserver certificate.

Conclusion

Each component of the architecture is issued a unique certificate and the entire process is fully automated. Additionally, we have an audit log of all certificates issued, and frequently exercise certificate expiration and rotation.

We did have to put in some time up front to learn Vault, discover the appropriate command line arguments, and integrate the solution discussed here into our existing configuration management system. However, by using Vault as a certificate authority, we drastically reduced the effort required to set up and maintain many Kubernetes clusters.


Add Exif data back to Facebook images

Published 4 Sep 2016 by addshore in Addshore.

I start this post not by talking about Facebook, but about Google Photos. Google now offers unlimited ‘high resolution’ storage within its service, where high resolution is defined as 16MP for an image and 1080p for video. Of course there is some compression here that some may argue against, but photos and video can also be uploaded at original quality (exactly as captured) and the cost of space for these files is very reasonable. So, it looks like I have found a new home for my piles of photos and videos that I want to be able to look back at in 20 years!

Prior to the Google Photos developments I stored a reasonable number of images on Facebook, and now I want to add them all to Google Photos too, but that is not as easy as I first thought. All of your Facebook data can easily be downloaded, which includes all of your images and videos, but not exactly as they were when you uploaded them: all of the Exif data such as location and timestamp has been stripped. This data is actually available in an HTML file which is served with each Facebook album. So, I wrote a terribly hacky script in PHP for Windows to extract that data and re-add it to the files so that they can be bulk uploaded to Google Photos and take advantage of the timeline and location features.

The code can be found below (it looks horrible but works…)

<?php

// README: Set the path to the extracted facebook dump photos directory here
$directory = 'C:/Users/username/Downloads/facebook-username/photos';

// http://www.sno.phy.queensu.ca/~phil/exiftool/
// README: Download this and set the path here (of the renamed exe)
$tool = 'C:\Users\username\exiftool.exe';

////////////////////////////////////////////////
//     Do not touch anything below here...    // =]
////////////////////////////////////////////////

echo "Starting\n";

$albums = glob( $directory . '/*', GLOB_ONLYDIR );

foreach ( $albums as $album ) {
    echo "Running for album $album\n";
    $indexFile = $album . '/index.htm';
    $dom = DOMDocument::loadHTMLFile( $indexFile );
    $finder = new DomXPath( $dom );
    $blockNodes = $finder->query( "//*[contains(concat(' ', @class, ' '), ' block ')]" );
    foreach ( $blockNodes as $blockNode ) {
        $imageNode = $blockNode->firstChild;
        $imgSrc = $imageNode->getAttribute( 'src' );
        $imgSrcParts = explode( '/', $imgSrc );
        $imgSrc = array_pop( $imgSrcParts );
        $imgLocation = $album . '/' . $imgSrc;

        echo "Running for file $imgLocation\n";

        $details = array();
        $metaDiv = $blockNode->lastChild;
        $details['textContent'] = $metaDiv->firstChild->textContent;
        $metaTable = $metaDiv->childNodes->item( 1 );
        foreach ( $metaTable->childNodes as $rowNode ) {
            $details[$rowNode->firstChild->textContent] = $rowNode->lastChild->textContent;
        }

        $toChange = array();

        $toChange[] = '"-EXIF:ModifyDate=' . date_format( new DateTime(), 'Y:m:d G:i:s' ) . '"';

        if ( array_key_exists( 'Taken', $details ) ) {
            $toChange[] = '"-EXIF:DateTimeOriginal=' .
                date_format( new DateTime( "@" . $details['Taken'] ), 'Y:m:d G:i:s' ) .
                '"';
        } else {
            continue;
        }
        if ( array_key_exists( 'Camera Make', $details ) ) {
            $toChange[] = '"-EXIF:Make=' . $details['Camera Make'] . '"';
        }
        if ( array_key_exists( 'Camera Model', $details ) ) {
            $toChange[] = '"-EXIF:Model=' . $details['Camera Model'] . '"';
        }
// Doing this will cause odd rotations.... (as facebook has already rotated the image)...
//      if ( array_key_exists( 'Orientation', $details ) ) {
//          $toChange[] = '"-EXIF:Orientation=' . $details['Orientation'] . '"';
//      }
        if ( array_key_exists( 'Latitude', $details ) && array_key_exists( 'Longitude', $details ) ) {
            $toChange[] = '"-EXIF:GPSLatitude=' . $details['Latitude'] . '"';
            $toChange[] = '"-EXIF:GPSLongitude=' . $details['Longitude'] . '"';
            // Tool will look at the sign used for NSEW!
            $toChange[] = '"-EXIF:GPSLatitudeRef=' . $details['Latitude'] . '"';
            $toChange[] = '"-EXIF:GPSLongitudeRef=' . $details['Longitude'] . '"';
            $toChange[] = '"-EXIF:GPSAltitude=' . '0' . '"';
        }

        exec( $tool . ' ' . implode( ' ', $toChange ) . ' ' . $imgLocation );
    }
}

echo "Done!\n";

I would rewrite it, but I have no need to (as it works). When searching online for some code to do just this I came up short, and thus thought I would post the rough idea and process for others to find, and perhaps improve on.
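
For reference, the exec() call above ends up running something along these lines for each photo (a sketch with hypothetical values and file names):

exiftool "-EXIF:DateTimeOriginal=2014:07:12 18:30:00" \
    "-EXIF:GPSLatitude=51.5074" "-EXIF:GPSLongitude=-0.1278" \
    "-EXIF:GPSLatitudeRef=51.5074" "-EXIF:GPSLongitudeRef=-0.1278" \
    photos/album_name/12345678_987654321_n.jpg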


Karateka: The Alpha and the Beta

Published 31 Aug 2016 by Jason Scott in ASCII by Jason Scott.

As I enter into a new phase of doing things and how I do things, let’s start with something pleasant.

karateka-beta

As part of the work with pulling Prince of Persia source code from a collection of disks a number of years back (the lion’s share of the work done by Tony Diaz), Jordan Mechner handed me an additional pile of floppies.

Many of these floppies have been imaged and preserved, but a set of them had not, mostly due to coming up with the time and “doing it right” and all the other accomplishment-blocking attributes of fractal self-analysis. That issue is now being fixed, and you are encouraged to enjoy the immediate result.

As Karateka (1985) became a huge title for Brøderbund Software, they wanted the program to run on as many platforms as possible. However, the code was not written to be portable; Brøderbund instead contracted with a number of teams to make Karateka versions on hardware other than the Apple II. The work by these teams, Jordan Mechner told me, often suffered from being ground-up rewrites of the original game idea – they would simply make it look like the game, without really spending time duplicating the internal timing or logic that Jordan had put into the original. Some came out fine on the other end; others did not.

Jordan’s opinion on the IBM port of Karateka was not positive. From his Making-of-Karateka journal (thanks to  for finding this entry):

[Scan of the journal entry]

You can now see how it looked and played when he made these comments. I just pulled out multiple copies of Karateka from a variety of internally distributed floppies Jordan had in the set he gave me. I chose two representative versions and now you can play them both on the Internet Archive.

The first version is what would now be called the “Alpha”, but which in this collection is just called “Version 1986-01-30”, and was duplicated on February 4, 1986. It is a version which was obviously done as some sort of milestone – debugging information is everywhere, and it starts with a prompt of which levels to try, before starting the game.

Without going too much into the specific technical limitations of PC Compatibles of the time, I’ll instead just offer the following screenshot, which will connect you to an instantly playable-in-browser version of the Karateka Alpha. This has never been released before.

alpha-karateka

You can see all sorts of weird artifacts and performance issues with the Alpha – glitches in graphics and performance, and of course the ever-present debugging messages and system information. The contractors doing the work, the Connelly Group, have no presence on the internet in any obvious web searches – they may have just been internal employees, or a name given to some folks just to keep distance between “games” work and “real” work; maybe that information will come out.

The floppy this came on, as shown above, had all sorts of markings for Brøderbund to indicate what the build number was, who had the floppies (inventory control), and that the disk had no protection routines on it, which makes my life in the present day notably easier. Besides the playable version and the information in a ZIP file, there is an IMG file of the entire 360k floppy layout, usable by a number of emulators or viewers.

The next floppy in terms of time stamp is literally called BETA, from March 3, 1986. With over a month of effort into the project, a bunch of bugs have been fixed, screens added, and naturally all the debugging information has been stripped away. I’m assuming this was for playtesters to check out, or to be used by marketing/sales to begin the process of selling it in the PC world. Here is a link to an instantly playable-in-browser version of the Karateka Beta. This has also never been released before.

beta

For the less button-mashy of us, here are the keys and a “handing it over to you at the user group meeting” version of how Karateka works.

You’re a dude going from the left to the right. If you go too far left, you will fall off the cliff and die. To the right are a bunch of enemies. You can either move or fight. If you are not in a fighting stance, you will die instantly, but in a fighting stance, you will move really slowly.

You use the arrow keys (left and right) to move. Press SPACE to flip between “moving” and “fighting” modes. The Q, A, and Z keys are high, middle and low punches. The W, S and X keys are high, middle and low kicks. The triangles on the bottom are life meters. Whoever runs out of triangles first in a fight will die.

It’s worthwhile to note that the games, being an Alpha and a Beta, are extremely rough. I wouldn’t suggest making them your first game of Karateka ever – that’s where you should play the original Apple II version

Karateka is a wealth of beginnings for understanding entertainment software – besides being a huge hit for Brøderbund, it’s an aesthetic masterwork, containing cinematic cutscenes and a clever pulling of cultural sources to combine into a multi-layered experience on a rather simple platform. From this groundwork, Jordan would go on to make Prince of Persia years later, and bring these efforts to another level entirely. He also endeavored to make the Prince of Persia code as portable and documented as possible, so different platforms would have similar experiences in terms of playing.

In 2012, Jordan released a new remake/reboot of Karateka, which is also cross-platform (the platforms now being PC, iOS, PS4, Xbox and so on) and is available at KARATEKA.COM. It is a very enjoyable remake. There are also ports of “Karateka Classic” for a world where your controls have to be onscreen, like this one.

In a larger sense, it’d be a wonderful world where a lot of old software was available for study, criticism, discussion and so on. We have scads of it, of course, but there’s so much more to track down. It’s been a driving effort of mine this year, and it continues.

But for now, let’s enjoy a really, really unpleasant but historically important version of Karateka.


HTTPS is live on Piwigo.com

Published 26 Aug 2016 by Pierrick Le Gall in The Piwigo.com Blog.

Some of you were waiting for it, others don’t know yet what it’s all about!

HTTPS is the way to encrypt communications between your web browser and the website you visit, your Piwigo for instance. It is mainly useful for the login form and administration pages. Your password is no longer sent in “plain text” through internet nodes, like your internet provider or Piwigo.com servers.


SSL certificate in action for HTTPS

How to use it?

For now, Piwigo doesn’t automatically use HTTPS. You have to switch manually if you want HTTPS. Just add “s” after “http” in the address bar of your web browser.

In the next few days or weeks, Piwigo will automatically switch to HTTPS on the login form and the pages you open afterwards.

Why wasn’t HTTPS already available?

Piwigo.com was born 6 years ago and HTTPS already existed at that time. Here are the 3 main reasons for the wait:

  1. Piwigo is photo management software, not a bank. Such a level of security was not considered a priority compared to other features.
  2. the Piwigo application and its related project, without considering Piwigo.com hosting, needed some code changes to work flawlessly with HTTPS. Today we’re proud to say Piwigo works great with multiple addresses, with or without HTTPS, and automatically uses the appropriate web address. If you have worked with other web applications, you certainly know how easy Piwigo makes your life when dealing with URLs.
  3. the multi-server infrastructure on Piwigo.com, with multiple sub-domains *.piwigo.com, made the whole encryption system a bit complex. Without going into details, for those of you interested: we use a wildcard SSL certificate from Gandi. The Nginx reverse proxy on the frontend server uses it, as does Nginx on the backend servers. All communication between Piwigo.com servers is encrypted when you use HTTPS.

What about custom domain names?

11.5% of Piwigo.com accounts use a custom domain name, i.e. something more than a *.piwigo.com web address.

Each SSL certificate, which is the “key” for encryption, is dedicated to a domain name. In this case, our SSL certificate is only “trusted” for *.piwigo.com.

You can try to use your domain name with HTTPS, but your web browser will display a huge security warning. If you tell your web browser “it’s OK, I understand the risk”, then you can use our certificate combined with your domain name.

The obvious solution is to use the recently released Let’s Encrypt. It will let us generate custom certificates that are perfectly compliant with web browser requirements. We will work on it.
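
For reference, issuing such a certificate with the certbot client looks roughly like this (a sketch; the domain and webroot path are placeholders):

certbot certonly --webroot -w /var/www/piwigo -d photos.example.com

The resulting fullchain.pem and privkey.pem would then be referenced from the Nginx configuration for that domain and the server reloaded.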


Kenny Austin and Friends at the Odd Fellow

Published 21 Aug 2016 by Dave Robertson in Dave Robertson.



Basic iPhone security for regular people

Published 18 Aug 2016 by Carlos Fenollosa in Carlos Fenollosa — Blog.

Real life requires a balance between convenience and security. You might not be a high-profile person, but we all have personal information on our phones which can give us a headache if it falls into the wrong hands.

Here are some options you can enable to harden your iPhone in the case of theft, a targeted attack or just a curious nephew who's messing with your phone.

Even if you don't enable them all, it's always nice to know that these features exist to protect your personal information. This guide is specific for iPhones, but I suppose that most of them can be directly applied to other phones.

Password-protect your phone

Your iPhone must always have a password. Otherwise, anybody with physical access to your phone will get access to all your information: calendar, mail, pictures or *gasp* browser history.

Passwords are inconvenient. However, even a simple 4-digit code will stop casual attackers, though it is not secure against a resourceful attacker.

☑ Use a password on your phone: Settings > Touch ID & Passcode

Furthermore, enable the 10-attempt limit, so that people can't brute-force your password.

☑ Erase data after 10 attempts: Settings > Touch ID & Passcode > Erase data (ON)

If your phone has Touch ID, enable it, and use a very long and complicated password to unlock your phone. You will only need to input it on boot and for a few options. It is reasonably secure and has few drawbacks for most users. Unless you have specific reasons not to do it, just go and enable Touch ID.

☑ Enable Touch ID: Settings > Touch ID & Passcode

Regarding password input, and especially if your phone doesn't have Touch ID, using a numeric keyboard is much faster than the QWERTY one. Here's a trick that will help you choose a secure numeric password which is easy to remember.

Think of a word and convert it to numbers as if you were dialing them on a phone, i.e. ABC -> 2, DEF -> 3, ..., WYZ -> 9. For example, if your password is "PASSWORD", the numeric code would be 72779673.

The iPhone will automatically detect that the password contains only numbers and will present a digital keyboard on the lock screen instead of a QWERTY one, making it super easy to remember and type while still keeping a high level of security.
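
The mapping is easy to double-check with a quick shell one-liner (a sketch; tr maps ABC→2 through WXYZ→9):

echo "PASSWORD" | tr 'a-zA-Z' '2223334445556667777888999922233344455566677778889999'
# prints 72779673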

☑ If you must use a numeric password, use a long one: Settings > Touch ID & Passcode

Harden your iPhone when locked

A locked phone can still leak private data. Accessing Siri, the calendar or messages from the lock screen is handy, but depending on your personal case, can give too much information to a thief or attacker.

Siri is a great source of data leaks, and I recommend that you disable it when your phone is locked. It will happily squeal your personal info, your contacts, tasks or events. A thief can easily learn everything about you or harass your family if they get their hands on a phone with Siri enabled on the lock screen.

This setting does not disable Siri completely; it just requires the phone to be unlocked for Siri to work.

☑ Disable Siri when phone is locked: Settings > Touch ID & Passcode > Siri

If you have confidential data on your calendar, you may also want to disable the "today" view which usually includes your calendar, reminders, etc.

☑ Disable Today view: Settings > Touch ID & Passcode > Today

Take a look at the other options there. You may want to turn off the notifications view, or the option to reply with a message. An attacker may spoof your identity by answering messages while the phone is locked, for example taking advantage of an SMS from "Mom" and tricking her into revealing her maiden name, pet names, etc., which are usually the answers to secret questions used to recover your passwords.

☑ Disallow message replies when the phone is locked: Settings > Touch ID & Passcode > Reply with Message

Having your medical information on the emergency screen has pros and cons. Since I don't have any dangerous conditions, I disable it. Your case may be different.

Someone with your phone can use Medical ID to get your name and picture, which may be googled for identity theft or sending you phishing emails. Your name can also be searched for public records or DNS whois information, which may disclose your home phone, address, date of birth, ID number and family members.

In summary, make it sure that somebody who finds your locked phone cannot discover who you are or interact as if they were you.

☑ Disable Medical ID: Health > Medical ID > Edit > Show When Locked

Some people think that letting anyone find out the owner of the phone is a good idea, since an honest person who finds your lost phone can easily contact you. However, you can always display a personalized message on your lock screen if you report your phone missing on iCloud.

☑ Enable "Find my phone": Settings > iCloud > Find my iPhone > Find My iPhone

Make sure that your phone will send its location just before it runs out of battery.

☑ Enable "Find my phone": Settings > iCloud > Find my iPhone > Send Last Location

To finish this section, if you don't have the habit of manually locking your phone after you use it, or before placing it in your pocket, configure your iPhone to do it automatically:

☑ Enable phone locking: Settings > General > Auto-Lock

Harden the hardware

Your phone is now secure and won't sing like a canary when it gets into the wrong hands.

However, your SIM card may. SIMs can contain personal information, like names, phones or addresses, so they must be secured, too.

Enable the SIM lock so that, on boot, it will ask for a 4-digit code besides your phone password. It may sound annoying, but it isn't. It's just an extra step that you only need to perform once every many days, when your phone restarts.

Otherwise, a thief can stick the SIM in another phone and access that information and discover your phone number. With it, you may be googled, or they may attempt phishing attacks weeks later.

Beware that with a SIM PIN enabled, the phone can't ping home after it has been shut down and turned on again until the PIN is entered.

☑ Enable SIM PIN: Settings > Phone > SIM PIN

Enable iCloud. When your phone is associated with an iCloud account, it is impossible for another person to use it, dropping its resale value to almost zero. I've had friends get their phones back after a casual thief tried and failed to sell them thanks to the iCloud lock, and finally decided to do the right thing and return them.

☑ Enable iCloud: Settings > iCloud

If you have the means, try to upgrade to an iPhone 5S or higher. These phones contain a hardware element called the Secure Enclave, which encrypts your personal information in a way that can't even be cracked by the FBI. If your phone gets stolen by a professional, they won't be able to solder the flash memory into another device and recover your data.

☑ Upgrade to a phone with a Secure Enclave (iPhone 5S or higher)

Harden your online accounts

In reality, your online data is much more at risk than your physical phone. Botnets constantly try to find vulnerabilities in services and steal user passwords.

The first thing you must do right now is to install a password manager. Your iPhone has one built into the system, which is good enough to generate unique passwords and auto-fill them when needed.

If you don't like Apple's Keychain, I recommend LastPass and 1Password.

Why do you need a password manager? The main reason is to avoid having a single password for all services. The popular trick of having a weak password for most sites and another strong password for important sites is a dangerous idea.

Your goal is to have a different password for each site/service, so that if it gets attacked or you inadvertently leak it to a phishing attack, it is no big deal and doesn't affect all your accounts.

Just have a different one for each service and let the phone remember all of them. I don't know my passwords: Gmail, Facebook, Twitter, my browser remembers them for me.

☑ Use a password manager: Settings > iCloud > Keychain > iCloud Keychain

There is another system which complements passwords, called "Two-Factor Authentication", or 2FA. You have probably used it in online banking; they send you an SMS with a confirmation code that you have to enter somewhere.

If your password gets stolen, 2FA is a fantastic barrier against an attacker. Without your phone, they can't access your data, even if they have all your passwords.

☑ Use 2FA for your online accounts: manual for different sites

2FA makes it critical to disable SMS previews: if a thief steals your phone and already has some of your passwords, they can use your locked phone to read 2FA SMS.

If you use iMessage heavily, this may be cumbersome, so decide for yourself.

☑ Disable SMS previews on locked phone: Settings > Notifications > Messages > Show Previews

Make it easy to recover your data

If the worst happens, and you lose your phone, get it stolen or drop it on the Venice canals, plan ahead so that the only loss is the money for a new phone. You don't want to lose your pictures, passwords, phone numbers, events...

Fortunately, iPhones have a phenomenal backup system which can store your phone data in the cloud or on your Mac. I have a Mac, but I recommend the iCloud backup nonetheless.

Apple only offers 5 GB of storage in iCloud, which is poor, but fortunately, the pricing tiers are fair. For one or two bucks a month, depending on your usage, you can buy the cheapest and most important digital insurance to keep all your data and pictures safe.

iCloud backup can automatically set up a new phone and make it behave exactly like your old phone.

If you own a Mac, once you pay for iCloud storage, you can enable the "iCloud Photo Library" on Settings > iCloud > Photos > iCloud Photo Library for transparent syncing of all your pictures between your phone and your computer.

☑ Enable iCloud backup: Settings > iCloud > Backup > iCloud Backup

If you don't want the iCloud backup, at least add a free iCloud account or any other "sync" account like Google's, and use it to store your contacts, calendars, notes and Keychain.

☑ Enable iCloud: Settings > iCloud

Bonus: disable your phone when showing pictures

Afraid of handing your phone over to show somebody a picture? People have a tendency to swipe around to see other images, which may be a bad idea in some cases.

To save them from seeing things that can't be unseen, you can use a trick with the Guided Access feature to lock all input to the phone, yet still show whatever is on the screen.

☑ Use Guided Access to lock pictures on screen: Read this manual

This is not a thorough guide

As the title mentions, this is an essential blueprint for iPhone users who are not a serious target for digital theft. High-profile people need to take many more steps to secure their data. Still, they all implement these options too.

The usual scenario for a thief who steals your phone at a bar is as follows: they will turn it off or put it in airplane mode and try to unlock it. Once they see that it's locked with iCloud, they can either try to sell it for parts, return it or discard it.

Muggers don't want your data. However, it doesn't hurt to implement some security measures.

In worse scenarios, there are criminal operations that specialize in buying stolen phones at a very low price and then running simple, large-scale attacks to trick unsuspecting users into unlocking the phone or giving up personal data.

You don't need the same security as Obama or Snowden. Nonetheless, knowing how your phone leaks personal information and the possible attack vectors is important in defending yourself from prying eyes.

You have your whole life on your phone. In the case of an unfortunate theft, make it so the only loss is the cost of a new one.

Tags: security

Comments? Tweet  


Faster and More Accessible: The New digitalocean.com

Published 16 Aug 2016 by DigitalOcean in DigitalOcean Blog.

It's here! The new digitalocean.com launched last week, and we're so excited to share it with you.

We unified the site with our updated branding, but more importantly, we focused on improving the site's accessibility, organization, and performance. This means that you'll now have faster load times, less data burden, and a more consistent experience.

This rebuild is a nod to the values at the core of our company: we want to build fast, reliable products that anyone can use. So how did we make our site twice as fast and WCAG AA compliant? Read on:


Accessibility

One of the biggest concerns we had for our website redesign was making it accessible for users with low vision, people who use screen readers, and users who navigate via keyboard. Our primary focus was to be WCAG 2.0 AA compliant in terms of color contrast and to use accurately semantic HTML. This alone took care of most of the accessibility concerns we faced.

We also made sure to include text with any descriptive icons and images. Where we couldn't use native HTML or SVG elements, we used ARIA roles and attributes, especially focusing on our forms and interactive elements. The design team did explorations based on the various ways people may perceive color and put our components through a variety of tests to make sure these were also accounted for.

control panel color palette

We keep track of our progress on an internally-hosted application called pa11y, and when we uploaded our new site to the staging server initially, seeing the drop in errors and warnings made all of the audits worth it:

pa11y dashboard

A Unified System

The old digitalocean.com CSS had thousands of rules, declarations, and unique colors. The un-gzipped file size came out to a whopping 306 kB.

For the redesign, we implemented a new design system called Float based on reusable components and utility classes to simplify and streamline our styles. With the Float framework, which we hope to open source soon, we were able to get the CSS file size down to almost a quarter of its original size: only 80kB!

We also dramatically reduced the complexity of our CSS and unified our design. We now have:

This framework gave us a single map of existing values to reference instead of creating new variables each time, which is how we reduced the size of our media queries by 89%. We also used utility classes (such as u-mb--large, which translates to "utility, margin-bottom, large") to unify our margin and padding sizes, which reduced the number of unique spacing resets previously sent down to users by 75%.

Not only is the CSS more unified throughout the site, both visually and variably, it is also much more performant as a result, saving users both time and data.

Front-end Performance

The largest pain point for load time on the web in general is easily media assets. According to the HTTP Archive, as of July 15, 2016, the average web page is 2409 kB, and images make up about 63% of this, at an average of 1549 kB. On the new digitalocean.com, we've kept this in mind and set a higher goal for our site assets: less than 1000 kB, with a very fast first load time.

We use SVG images for most of our media and icons throughout the site, which are generally much smaller than .jpg or .png files by nature; SVGs are instructions for painting an image rather than full raster images themselves. This also means that the images can scale and shrink with no loss of quality across various devices.

We've also built an icon sprite system using <symbol> and <use> to access these icons. This way, they can be shared in a single resource download for the user throughout the site. Like our scripts, we minify these sprites to eliminate additional white space, as well as minify all of our media assets automatically through our gulp-based build process.

There was one asset, however, that rang in at 600 kB on the old digitalocean.com: the animated gif on the homepage. Gifs are a heavy format, but they can be very convenient. To minify this asset as much as possible, we manually edited it in Photoshop to reduce the color range to only the necessary colors and trimmed the frame count by hand. This alone saved 200 kB from the already-automatically-optimized gif without reducing its physical size, getting our site down to that goal of less than 1000 kB.

site comparison summary

Conclusion

There is always more work to be done in terms of improved performance and better accessibility, but we're proud of the improvements we've made so far and we'd love to hear what you think of the new digitalocean.com!


Delete non-existent pages from MediaWiki that are listed in Special:AllPages

Published 12 Aug 2016 by DeathCamel57 in Newest questions tagged mediawiki - Webmasters Stack Exchange.

I'm trying to delete two pages:

These two pages are in the Special:AllPages list even though they return 404 not found status.

How can I delete them?
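
One common approach, assuming shell access to the wiki server, is to use MediaWiki's maintenance scripts rather than the web UI (a sketch; check the behaviour of each script against your MediaWiki version first):

php maintenance/cleanupTitles.php
# repairs or renames entries in the page table whose titles are no longer valid
php maintenance/deleteBatch.php -r "removing broken entries" titles.txt
# titles.txt: one page title per line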


PHP does not handle "bigger" http requests correctly

Published 8 Aug 2016 by user6681109 in Newest questions tagged mediawiki - Server Fault.

After an OS update, "bigger" HTTP requests are no longer handled correctly by the web server/PHP/MediaWiki. Wiki article content is truncated after about 6K characters and MediaWiki reports a loss of session.

Symptoms: I first recognized the error with my formerly working installation of MediaWiki (PHP). When I edit an article and its size grows beyond approx. 6k characters, the article text is truncated and MediaWiki refuses to save the new text, reporting a lost session error instead. Smaller articles are not affected.

Question: Is this possibly a bug in PHP? Should I file a bug report? Or am I doing something wrong? Is something misconfigured?

Context: At home, I recently updated my raspbian LAMP server from wheezy to jessie. It all worked well before.

  1. Operating system: Raspbian jessie (formerly wheezi) on a Raspberry Pi.
  2. Apache 2.4.
  3. phpinfo() shows no indication of suhosin, which is sometimes reported to cause problems with larger http requests. Also, other PHP parameters that are sometimes mentioned as relevant on the web are unsuspicious: PHP Version 5.6.24-0+deb8u1. max_input_time=60, max_execution_time=30, post_max_size=8M

What I tried so far:

  1. Other PHP program: To investigate further, I uploaded files through a simple PHP file upload script. Similar problem; file upload does not work. (For your reference, the code of the upload script was taken from here: http://www.codingcage.com/2014/12/simple-file-uploading-with-php.html The script uses simple form data, no Ajax, no JSON, ...)
  2. Larger file causes split: Moreover, larger http file upload requests (using files of several hundred KB) are seemingly split into two requests. The apache access log file shows (remember this is actually only a single request from the browser):
    • ... - - [05/Aug/2016:10:52:38 +0200] "POST /simpleupload.php HTTP/1.1" 200 85689 "https://.../simpleupload.php" "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:38.0) Gecko/20100101 Firefox/38.0"
    • ... - - [05/Aug/2016:10:52:38 +0200] "\xb4W\xcd\xff" 400 557 "-" "-" -
  3. Other browsers: The behavior can be replicated with different browsers: Firefox on Linux, Firefox 38 on Windows, and elinks browser on same machine.
  4. Eliminate network problems: I used elinks to access the webserver on localhost. Same problems in MediaWiki and the PHP file upload script.
  5. Increased Log level: Increasing the Apache LogLevel to debug does not bring up any new information during request handling.
  6. Error does not occur with Perl: The problem does not occur with a different file upload script written in Perl; file upload works properly there. So it does not seem to be a problem with the OS, Apache, browser, ...

Remarks: This is my attempt to rephrase my locked/on-hold question https://unix.stackexchange.com/questions/301444/small-http-requests-get-truncated, which I cannot edit anymore.
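
One way to take the browser out of the equation entirely is to replay a large upload with curl and check whether the body arrives intact (a sketch; the form field name and URL are guesses based on the upload script mentioned above):

head -c 500000 /dev/zero | tr '\0' 'a' > /tmp/test.txt
curl -v -F "file=@/tmp/test.txt" "https://localhost/simpleupload.php"

If curl shows the full request being sent but the server still logs a split or truncated request, the problem is on the server side (Apache/PHP) rather than in the client.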


Can preloaded text for edit pages use templates that change depending on page creator's wishes?

Published 7 Aug 2016 by user294584 in Newest questions tagged mediawiki - Webmasters Stack Exchange.

I'm making a collaborative fiction writing site with MediaWiki that will host stories by different authors. Some stories will allow any kind of editing, others just minor changes, others just typo fixes, others no changes at all except after discussion, etc.

I found the way to change MediaWiki:copyrightwarning2 and put a generalized message there, but I'd really like a way for authors to customize a page that gets pulled in, perhaps by a template, into preload text that appears at the top of the edit page.

If it's just for all pages they've authored, that would be fine, but ideally it could be on a per-story basis.

Is there a way to implement such a thing?


Does snappy packaging make a convenient, portable, offline MediaWiki possible (say, on a thumb drive)?

Published 28 Jul 2016 by wattahay in Newest questions tagged mediawiki - Ask Ubuntu.

I LOVE MediaWiki, and would love to have one as a personal wiki solution on a thumb drive. There seem to be solutions for this on Windows, via XAMPP, but from what I can tell, Linux does not offer the same.

Now that snaps are here, I am wondering if they make such a technology more accessible.

How does one go about creating a portable, offline MediaWiki on, say, a thumb drive? (I apologize, because I realize this forum could be the wrong place if this has nothing to do with snaps.)

Thank you for any direction on this ahead of time.
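
For what it's worth, one snap-free approach is to keep MediaWiki on the drive with an SQLite database and serve it with PHP's built-in web server (a sketch, assuming PHP with SQLite support is installed on the host machine):

cd /media/usb/mediawiki   # the unpacked MediaWiki tree on the thumb drive
php -S localhost:8080     # then browse to http://localhost:8080/index.php

Choosing SQLite in the MediaWiki installer keeps the whole wiki, database included, inside that one directory.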


Living in a disrupted economy

Published 21 Jul 2016 by Carlos Fenollosa in Carlos Fenollosa — Blog.

There is this continuing discussion on whether technology destroys more jobs than it creates. Every few years, yet another tech revolution occurs, journalists publish articles, pundits share their opinions, politicians try to catch up, and those affected always voice their concerns. These couple years have been no exception, thanks to Uber, Airbnb, and the called sharing economy.

I'm a technologist and a relatively young person, so I am naturally biased towards technological disruption. After all, it is people like me who are trying to make a living by taking over older jobs.

I suggest that you take a few minutes to read a fantastic article titled The $3500 shirt. That essay reveals how horrible some industries were before they could be automated or replaced by something better. Go on, please read it now, it will only take three minutes.

Now, imagine you had to spend a couple of weeks of your time to make a t-shirt from scratch. Would that be acceptable? I guess we all more or less agree that the textile revolution was a net gain for society. Nevertheless, when it occurred, some Luddites probably complained, arguing that the loom put seamstresses out of work.

History is packed with dead industries. We killed the ice business with the modern fridge. We burn less coal for energy, so miners go unemployed. And let's not forget the basis of modern civilization, the agricultural revolution, which is the only reason we humans can feed ourselves. Without greenhouses, nitrates, tractors, pest protection and advancements in farming, humanity would starve.

Admittedly, it transformed the primary sector from around 65% of the workforce to the current 10%. Isn't it great that most of us don't need to wake up before sunrise to water our crops? In hindsight, can you imagine proclaiming that the 1800s way of farming is better because it preserves farming jobs?

The bottom line is that all economic transformations are a net gain for society. They may not be flawless, but they have allowed us humans to live a better life.

So why do some characters fight against current industry disruptions if history will prove them wrong?

******

As a European and a social democrat, I believe that States must regulate some economies to avoid monopolies and abuses, supporting the greater good. Furthermore, I sympathize with the affected workforce, both personally and in a macroeconomic level. All taxi drivers suddenly going jobless because of Uber is detrimental to society.

However, it pains me to see that European politicians are taking the opposite stance, brandishing law and tradition as excuses to hinder progress.

Laws must serve people, not the other way around. If we analyze the taxi example, we learn that there is a regulation which requires taxi drivers to pay a huge sum of money up front to operate. Therefore, letting anybody get in that business for free is unfair and breaks the rules of the game. Unsurprisingly, this situation is unfair not because of the new players, but because that regulation is obsolete.

It isn't ethically right that somebody who spent a lot of money to get a license sees their job at risk. But the solution isn't to block other players, especially when it's regulation which is at fault. Let's sit down, think how to establish a transition period, and maybe even reimburse drivers part of that money with the earnings from increased taxes due to a higher employment and economic activity.

There is a middle-ground solution: don't change the rules drastically, but don't use them as an excuse to impede progress.

At the end of the day, some careers are condemned to extinction. That is a real social drama, however, what should we do? Artificially stop innovation to save jobs which are not efficient and, when automated or improved, they make the world better for everyone?

******

We millennials have learned that the concept of a single, lifetime profession just does not exist anymore. Previous generations do not want to accept that reality. I understand that retraining an older person for a new career may be difficult, but if the alternative is letting that person obstruct younger people's opportunities, that's not fair.

Most professions decline organically, by the very nature of society and economy. It is the politicians' responsibility to mediate when this process is accelerated by a new industry or technology. New or automated trades will take their place, usually providing a bigger collective benefit, like healthcare, education, or modern farming.

Our duty as a society is to make sure everyone lives a happy and comfortable life. Artificially blocking new technologies and economic models harms everyone. If it were for some Luddites, we'd be still paying $3500 for a shirt, and that seamstress would never have been a nurse or a scientist.

Tags: law, startups

Comments? Tweet  


Roots and Flowers of Quaker Nontheism (Abridged)

Published 19 Jul 2016 by Os Cresson in NontheistFriends.org.

This abridged version of “Roots and Flowers of Quaker Nontheism” was compiled for the convenience of students of Quaker nontheism. An ellipsis ( . . . ) or brackets ([ ]) indicate where material has been omitted. The original is a chapter in Quaker and Naturalist Too (Morning Walk Press of Iowa City, IA, 2014), available from www.quakerbooks.org. The chapter includes text (pp. 65-103), bibliography (pp. 147-157), source notes (pp. 165-172), and references to 20 quotations that appear elsewhere in the book but are not in this abridged version.

Part I: Roots of Quaker Nontheism

This is a study of the roots of Quaker nontheism today. Nontheist Friends are powerfully drawn to Quaker practices but they do not accompany this with a faith in God. Nontheism is an umbrella term covering atheists, agnostics, secular humanists, pantheists, wiccaists, and others. You can combine nontheist with other terms and call yourself an agnostic nontheist or atheist nontheist, and so on. Some nontheists have set aside one version of God (e.g. as a person) and not another (e.g. as a word for good or your highest values). A negative term like nontheism is convenient because we describe our views so many different ways when speaking positively.

Many of the Quakers mentioned here were not nontheists but are included because they held views, often heretical in their time, that helped Friends become more inclusive. In the early days this included questioning the divinity of Christ, the divine inspiration of the Bible, and the concepts of heaven, hell, and immortality. Later Friends questioned miracles, the trinity, and divine creation. Recently the issue has been whether Quakers have to be Christians, or theists. All this time there were other changes happening in speech, clothing, marriage practices, and so on. Quakerism has always been in progress.

Views held today are no more authentic simply because they were present in some form in earlier years. However, it is encouraging for Quaker nontheists today to find their views and their struggle prefigured among Friends of an earlier day.

In the following excerpts we learn about Quaker skeptics of the past and the issues they stood for. These are the roots that support the flowers of contemporary Quaker nontheism. . . .

First Generation Quaker Skeptics

Quakers were a varied group at the beginning. There was little effective doctrinal control and individuals were encouraged to think for themselves within the contexts of their local meetings. Many of the early traditions are key for nontheists today, such as the emphasis on actions other than talk and the injunction to interpret what we read, even Scripture. All the early Friends can be considered forerunners of the Quaker nontheists of today, but two people deserve special mention. Gerard Winstanley (1609–c.1660) was a Digger, or True Leveller, who became a Quaker. . . . He published twenty pamphlets between 1648 and 1652 and was a political and religious revolutionary. He equated God with the law of the universe known by observation and reason guided by conscience and love. Winstanley wrote,

“I’ll appeal to your self in this question, what other knowledge have you of God but what you have within the circle of the creation? . . . For if the creation in all its dimensions be the fullness of him that fills all with himself, and if you yourself be part of this creation, where can you find God but in that line or station wherein you stand.” [Source Note #1]

Winstanley also wrote,

“[T]he Spirit Reason, which I call God…is that spirituall power, that guids all mens reasoning in right order, and to a right end: for the Spirit Reason, doth not preserve one creature and destroy another . . . but it hath a regard to the whole creation; and knits every creature together into a onenesse; making every creature to be an upholder of his fellow.” [#2]

His emphasis was on the world around and within us: “O ye hear-say  Preachers, deceive not the people any longer, by telling them that this glory shal not be known and seen, til the body is laid in the dust. I tel you, this great mystery is begun to appear, and it must be seen by the material eyes of the flesh: And those five senses that is in man, shall partake of this glory.” [#3]

Jacob Bauthumley (1613–1692) was a shoemaker who served in the Parliamentary Army. . . . His name was probably pronounced Bottomley since this is how Fox spelled it. In 1650 he published The Light and Dark Sides of God, the only pamphlet of his that we have. This was declared blasphemous and he was thrown out of the army, his sword broken over his head, and his tongue bored. After the Restoration he became a Quaker and a librarian and was elected sergeant–at–mace in Leicester. For Bauthumley, God dwells in men and in all the rest of creation and nowhere else. We are God even when we sin. Jesus was no more divine than any person is, and the Bible is not the word of God. He wrote,

“I see that all the Beings in the World are but that one Being, and so he may well be said, to be every where as he is, and so I cannot exclude him from Man or Beast, or any other Creature: Every Creature and thing having that Being living in it, and there is no difference betwixt Man and Beast; but as Man carries a more lively Image of the divine Being then [than] any other Creature: For I see the Power, Wisdom, and Glory of God in one, as well as another onely in that Creature called Man, God appears more gloriously in then the rest. . . . And God loves the Being of all Creatures, yea, all men are alike to him, and have received lively impressions of the divine nature, though they be not so gloriously and purely manifested in some as in others, some live in the light side of God, and some in the dark side; But in respect of God, light and darkness are all one to him; for there is nothing contrary to God, but onely to our apprehension. . . . It is not so safe to go to the Bible to see what others have spoken and writ of the mind of God as to see what God speaks within me and to follow the doctrine and leadings of it in me.” [#4]

Eighteenth Century Quaker Skeptics

There were skeptical Quakers who asserted views such as that God created but does not run the universe, that Jesus was a man and not divine, that much of theology is superstition and divides people unnecessarily, and that the soul is mortal.

An example is John Bartram (1699–1777) of Philadelphia. . . . He was a farmer and perhaps the best known botanist in the American colonies. Bartram had a mystical feeling for the presence of God in nature and he supported the rational study of nature. In 1758 he was disowned by Darby Meeting for saying Jesus was not divine, but he continued to worship at that meeting and was buried there.

In 1761 he carved a quote from Alexander Pope over the door of his greenhouse: “Slave to no sect, who takes no private road, but looks through Nature up to Nature’s God.” In 1743 he wrote, “When we are upon the topic of astrology, magic and mystic divinity, I am apt to be a little troublesome, by inquiring into the foundation and reasonableness of these notions.” In a letter to Benjamin Rush he wrote, “I hope a more diligent search will lead you into the knowledge of more certain truths than all the pretended revelations of our mystery mongers and their inspirations.” [#5] . . .

Free Quakers

These Friends were disowned for abandoning the peace testimony during the Revolutionary War. The Free Quakers cast the issue in more general terms. They supported freedom of conscience and saw themselves as upholding the original Friends traditions. They wrote:

“We have no new doctrine to teach, nor any design of promoting schisms in religion. We wish only to be freed from every species of ecclesiastical tyranny, and mean to pay a due regard to the principles of our forefathers . . . and hope, thereby, to preserve decency and to secure equal liberty to all. We have no designs to form creeds or confessions of faith, but [hope] to leave every man to think and judge for himself…and to answer for his faith and opinions to . . . the sole Judge and sovereign Lord of conscience.” [#6]

Their discipline forbade all forms of disownment: “Neither shall a member be deprived of his right among us, on account of his differing in sentiment from any or all of his brethren.” [#7]

There were several Free Quaker meetings, the longest lasting being the one in Philadelphia from 1781 to 1834.

Proto–Hicksites

. . . Hannah Barnard (1754–1825) of New York questioned the interpretation of events in the Bible and put reason above orthodoxy and ethics over theology. She wrote a manual in the form of a dialogue to teach domestic science to rural women. It included philosophy, civics, and autobiography. Barnard supported the French Revolution and insisted that masters and servants sit together during her visits. In 1802 she was silenced as a minister and disowned by Friends. She wrote,

“[N]othing is revealed truth to me, as doctrine, until it is sealed as such on the mind, through the illumination of that uncreated word of God, or divine light, and intelligence, to which the Scriptures, as well as the writings of many other enlightened authors, of different ages, bear plentiful testimony. . . . I therefore do not attach the idea or title of divine infallibility to any society as such, or to any book, or books, in the world; but to the great source of eternal truth only.” [#8]

Barnard also wrote, “under the present state of the Society I can with humble reverent thankfulness rejoice in the consideration that I was made the Instrument of bringing their Darkness to light.” [#9] On hearing Elias Hicks in 1819, she is said to have commented that these were the ideas for which she had been disowned. He visited her in 1824, a year before she died.

[Also mentioned in the original version of this essay are Job Scott (1751–1793), Abraham Shackleton (1752–1818), Mary Newhall (c.1780–1829) and Mary Rotch.]

Hicksites

The schism that started in 1827 involved many people but it is instructive to focus on one man at the center of the conflict. Elias Hicks (1748–1830) traveled widely, urging Friends to follow a God known inwardly and to resist the domination of others in the Society. He wrote,

“There is scarcely anything so baneful to the present and future happiness and welfare of mankind, as a submission to traditional and popular opinion, I have therefore been led to see the necessity of investigating for myself all customs and doctrines . . . either verbally or historically communicated . . . and not to sit down satisfied with any thing but the plain, clear, demonstrative testimony of the spirit and word of life and light in my heart and conscience.” [#10]

Hicks emphasized the inward action of the Spirit rather than human effort or learning, but he saw a place for reason. He turned to “the light in our own consciences, . . . the reason of things, . . . the precepts and example of our Lord Jesus Christ, (and) the golden rule.” [#11]

[Also mentioned: Benjamin Ferris (1780–1867).]

Manchester Free Friends

David Duncan (c.1825–1871), a former Presbyterian who had trained for the ministry, was a merchant and manufacturer in Manchester, England. He married Sarah Ann Cooke Duncan and became a Friend in 1852. He was a republican, a social radical, a Free Thinker, and an aggressive writer and debater. Duncan began to doubt Quaker views about God and the Bible and associated the Light Within with intellectual freedom. He developed a following at the Friends Institute in Manchester, and the publication of his Essays and Reviews in 1861 drew the attention of the Elders. In it he wrote, “If the principle were more generally admitted that Christianity is a life rather than a formula, theology would give place to religion . . . and that peculiarly bitter spirit which actuates religionists would no longer be associated with the profession of religion.” [#12] In 1871 he was disowned and then died suddenly of smallpox. Sarah Ann Duncan and about 14 others resigned from their meeting and started what came to be called the Free Friends.

In 1873, this group approved a statement which included the following:

“It is now more than two years and a quarter since we sought, outside of the Society of Friends, for the liberty to speak the thoughts and convictions we entertained which was denied to us within its borders, and for the enjoyment of the privilege of companionship in “unity of spirit,” without the limitations imposed upon it by forced identity of opinion on the obscure propositions of theologians. We were told that such unity could not be practically obtained along with diversity of sentiment upon fundamental questions, but we did not see that this need necessarily be true where a principle of cohesion was assented to which involved tolerance to all opinions; and we therefore determined ourselves to try the experiment, and so remove the question, if possible, out of the region of speculation into that of practice. We conceived one idea in common, with great diversity of opinion amongst us, upon all the questions which divide men in their opinions of the government and constitution of the universe. We felt that whatever was true was better for us than that which was not, and that we attained it best by listening and thinking for ourselves.” [#13]

Joseph B. Forster (1831–1883) was a leader of the dissidents after the death of David Duncan. (For another excerpt, see p. 17.) He wrote, “[E]very law which fixes a limit to free thought, exists in violation of the very first of all doctrines held by the Early Quakers,—the doctrine of the ‘Inner Light’.” [#14]

Forster was editor of a journal published by the Free Friends. In the first issue he wrote,

“We ask for [The Manchester Friend] the support of those who, with widely divergent opinions, are united in the belief that dogma is not religion, and that truth can only be made possible to us where perfect liberty of thought is conceded. We ask for it also the support of those, who, recognizing this, feel that Christianity is a life and not a creed; and that obedience to our knowledge of what is pure and good is the end of all religion. We may fall below our ideal, but we shall try not to do so; and we trust our readers will, as far as they can, aid us in our task.” [#15]

[Also mentioned: George S. Brady (1833–1913).]

Progressive and Congregational Friends

The Progressive Friends at Longwood (near Philadelphia) were committed to peace and the rights of women and blacks, and were also concerned about church governance and doctrine. . . . Between 1844 and 1874 they separated from other Hicksite Quakers and formed a monthly meeting and a yearly meeting. They asked, “What right had one Friend, or one group of Friends, to judge the leadings of others?” [#16] They objected to partitions between men’s and women’s meetings and to the authority of meeting elders and ministers over the expression of individual conscience and other actions of the members. There were similar separations in Indiana Yearly Meeting (Orthodox) in the 1840s, in Green Plain Quarterly Meeting in Ohio in 1843, in Genesee Yearly Meeting (Hicksite) in northern New York and Michigan, and in New York Yearly Meeting in 1846 and 1848.

A Congregational Friend in New York declared,

“We do not require that persons shall believe that the Bible is an inspired book; we do not even demand that they shall have an unwavering faith in their own immortality; nor do we require them to assert a belief in the existence of God. We do not catechize men at all as to their theological opinions. Our only test is one which applies to the heart, not to the head. To all who seek truth we extend the hand of fellowship, without distinction of sex, creed and color. We open our doors, to all who wish to unite with us in promoting peace and good will among men. We ask all who are striving to elevate humanity to come here and stand with us on equal terms.” [#17]

In their Basis of Religious Association, the Progressive Friends at Longwood welcomed “all who acknowledge the duty of defining and illustrating their faith in God, not by assent to a creed, but lives of personal purity, and works of beneficence and charity to mankind.” They also wrote,

“We seek not to diminish, but to intensify in ourselves the sense of individual responsibility. . . . We have set forth no forms or ceremonies; nor have we sought to impose upon ourselves or others a system of doctrinal belief. Such matters we have left where Jesus left them, with the conscience and common sense of the individual. It has been our cherished purpose to restore the union between religion and life, and to place works of goodness and mercy far above theological speculations and scholastic subtleties of doctrine. Creed–making is not among the objects of our association. Christianity, as it presents itself to our minds, is too deep, too broad, and too high to be brought within the cold propositions of the theologian. We should as soon think of bottling up the sunshine for the use of posterity, as of attempting to adjust the free and universal principles taught and exemplified by Jesus of Nazareth to the angles of a manmade creed.