Sam's news

Here are some of the news sources I follow.

My main website is at https://samwilson.id.au/.


closed note (near -31.989, 115.754)

Published 24 Jun 2018 by Sam Wilson in OpenStreetMap Notes.

Comment

Resolved about 20 hours ago by Sam Wilson
Fixed via survey.

Full note

Created 8 months ago by TheSwavu
Delamare Lane. The above "street" has appeared in the G-NAF database between the May 2016 and August 2017 editions but is currently not in OSM (or has a different name). Before mapping, the following will be required: 1. Confirm that this is a real street (or which street has changed names); 2. Geometry from imagery, MRWA-514, or survey; 3. Confirm name from a source (MRWA-514 or survey) we are allowed to use in OSM. #newstreet #wa #2017-08
Updated about 23 hours ago
I confirm this name: https://flic.kr/p/JPTWog
Resolved about 20 hours ago by Sam Wilson
Fixed via survey.

new comment (near -31.989, 115.754)

Published 24 Jun 2018 in OpenStreetMap Notes.

Comment

Updated about 23 hours ago
I confirm this name: https://flic.kr/p/JPTWog

Full note

Created 8 months ago by TheSwavu
Delamare Lane. The above "street" has appeared in the G-NAF database between the May 2016 and August 2017 editions but is currently not in OSM (or has a different name). Before mapping, the following will be required: 1. Confirm that this is a real street (or which street has changed names); 2. Geometry from imagery, MRWA-514, or survey; 3. Confirm name from a source (MRWA-514 or survey) we are allowed to use in OSM. #newstreet #wa #2017-08
Updated about 23 hours ago
I confirm this name: https://flic.kr/p/JPTWog
Resolved about 20 hours ago by Sam Wilson
Fixed via survey.

cardiCast episode 33 – Eddie Marcus

Published 24 Jun 2018 by Justine in newCardigan.

Perth April 2018 cardiParty

Recorded live

Eddie Marcus (the sharp mind behind the Dodgy Perth blog) takes us on the shortest heritage pub trail ever, exploring the Greek and Roman architecture of three iconic Northbridge pubs.

There are arches, architects and anecdotes on this entertaining meander through the history of Perth architecture and pubs.

The three stops on this walking tour are The Brass Monkey, PICA Bar, and The Court. Each of the pubs forms part of the Heritage Perth pub trails app, available to download on Apple devices.

Bad language warning: please note that there are some swear words throughout this episode.

 

newcardigan.org
glamblogs.newcardigan.org

Music by Professor Kliq ‘Work at night’ Movements EP.
Sourced from Free Music Archive under a Creative Commons licence.


Day 8: Ciao Italy!

Published 23 Jun 2018 by legoktm in The Lego Mirror.

Part of a series on my journalism faculty-led program through Italy and Greece.

Today was effectively our last day in Italy. Tomorrow we're flying to Athens, and will spend the rest of our time in Greece.

We went to see the Valley of the Temples, which, surprise surprise, is not actually a valley! It's on a mountain ridge, which actually makes for better views and sights than if it were in a valley.

My favorite part was probably the statue of Icarus. Yes, I know it's a modern creation, but it's still incredibly cool. And the juxtaposition of the Temple of Concordia behind it was fantastic.

Icarus

I have more things I need to write, but I need a bit more time to collect my thoughts, and it's already late. So hopefully I'll write them tomorrow (from Greece!). Ciao Italy!


How to set up database(s) for a wiki family?

Published 23 Jun 2018 by Pyro Newman in Newest questions tagged mediawiki - Stack Overflow.

I have some issues setting up a wiki family with MediaWiki. Currently, I am working on two wikis with the same global settings, Penguin Ice Wikis and Penguiconverter. I am also unable to edit most of the Apache configuration on the web host I run my wiki on. The users at MediaWiki's support desk told me to point the document roots of both wikis at a single place. However, when I make edits to the Penguiconverter wiki, the edits also appear on the Penguin Ice Wikis wiki.

I added the upload location. When I did a test edit on my Penguiconverter wiki, the edit showed up on my Penguin Ice Wikis wiki. What can I do to make pages and edits separate on each wiki when they are pointed at a single place? I also added the database for Penguiconverter, and I got the following message: (Cannot access the database: Access denied for user 'gjlxrtap_penguinicewiki'@'localhost' (using password: YES) (localhost))

How do you fix both issues mentioned in the last paragraph?


Day 7: A short break

Published 22 Jun 2018 by legoktm in The Lego Mirror.

Part of a series on my journalism faculty-led program through Italy and Greece.

Today closes the first week of our trip, and all I want now is a break. I don't really feel physically tired, mostly just emotionally drained. I'm also running out of Skittles.

We spent most of today just working on existing source material that we had. I finished up a text story, a photo essay, and planned out our natural sound video. I also finally had some time to do my laundry, right as my supply of clean clothes began to run out :)

I had lunch at a nice wine bar that I found on the street. The food prices were pretty cheap, but I assume they expected everyone to buy wine along with their meal (I didn't!). Dinner on the other hand was a disaster, and probably the worst meal I've had in Italy :-(

In the evening there was a celebration of World Refugee Day, with musical, dancing, and acting performances from some of the refugees and migrants. We watched as three of them read out some important parts of the Universal Declaration of Human Rights - it was incredibly moving. I also think it's a testament to Mr. Edwards, who taught me the UDHR so well that I instantly recognized it in Italian.

Looking forward, we have another day in Italy before we head to Greece. It's supposed to mostly be a tourism day, which I hope allows me to collect my thoughts before we jump right back into the fray.

I think the biggest frustration I have right now is that journalists are supposed to stay impartial (rightfully so), while I want to do things, and make change happen.

New segment: things people needed today that I was unprepared for, and didn't have in my backpack: fork/silverware, and napkins.


How 2,000 Droplets Broke the Enigma Code in 13 Minutes

Published 22 Jun 2018 by TC Currie in The DigitalOcean Blog.

How 2,000 Droplets Broke the Enigma Code in 13 Minutes

In late 2017, at the Imperial War Museum in London, developers applied modern artificial intelligence (AI) techniques to break the “unbreakable” Enigma machine used by the Nazis to encrypt their correspondence in World War II. Using AI processes across 2,000 DigitalOcean servers, engineers at Enigma Pattern accomplished in 13 minutes what took Alan Turing years to do—and at a cost of just $7.

I have long been fascinated by the Enigma machine and its impact on World War II. Aside from being a huge history geek, my father-in-law went over to Normandy on D+3 (three days after the Omaha beachhead was established). He served in an advance corps, finding ways for the army to move across the country, and as such, they were the first to come across one of the concentration camps and liberate it. None of that would have been possible without Enigma.

The Enigma Machine

The Enigma machine is a complicated apparatus consisting of a keyboard, a set of rotors, an alphabet ring, and plug connections, all configurable by the operator. For the message to be both encrypted and decrypted, both operators had to know two sets of codes. A daily base code, changed every 24 hours, was published monthly by the Germans. Then, each operator created an individual setting used only for that message. The key to the individual code was sent in the first characters of the message, coded in the base code. This created over 53 billion possible combinations, changing every 24 hours. Because of this, the machine was widely considered unbreakable.
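The layered substitutions described above can be illustrated with a toy, single-rotor model. This is a sketch for intuition only, not the historical machine (which used three or four rotors and more elaborate stepping); the rotor and reflector strings below follow the well-known Rotor I and Reflector B wirings, and the plugboard pairs are arbitrary. The key property on display is that the reflector makes the whole path symmetric, so encrypting a ciphertext with the same initial settings recovers the plaintext.

```python
import string

ALPHA = string.ascii_uppercase
ROTOR = "EKMFLGDQVZNTOWYHXUSPAIBRCJ"      # Rotor I wiring (any permutation works)
REFLECTOR = "YRUHQSLDPXNGOKMIEBFZCWVJAT"  # Reflector B: a self-inverse pairing

def enigma(text, ring=0, plugs=(("A", "B"),)):
    """Encrypt (or, symmetrically, decrypt) an uppercase-only message."""
    plug = {}
    for a, b in plugs:                            # plugboard is an involution
        plug[a], plug[b] = b, a
    out, pos = [], ring
    for ch in text:
        c = plug.get(ch, ch)                      # plugboard in
        c = ROTOR[(ALPHA.index(c) + pos) % 26]    # forward through the rotor
        c = REFLECTOR[ALPHA.index(c)]             # bounce off the reflector
        c = ALPHA[(ROTOR.index(c) - pos) % 26]    # back through the rotor
        out.append(plug.get(c, c))                # plugboard out
        pos = (pos + 1) % 26                      # rotor steps on every key press
    return "".join(out)
```

Decryption is the same call with the same settings, which is exactly why a brute-force key search only needs one function; and because the reflector has no fixed points, no letter ever encrypts to itself.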

Marian Rejewski, working with other mathematicians at the Polish Cipher Bureau, cracked an early version of the Enigma machine in 1932 by the tried-and-true method of stealing a few machines and reverse engineering the mechanism. It took him just under a year to figure out the general principle of the German military’s double message setting and the wiring of the rotors, and another year to catalog the settings. After all of that, daily keys could be obtained in under 20 minutes.

But as Germany revved up its war machine, the Nazi navy made the machine more complex with the addition of plugs and more rotors, making it impossible for humans to work through the billions of possible combinations. Enter Bletchley Park in rural England, where Alan Turing, a brilliant English mathematician, gathered a team of cryptographers, puzzle solvers, linguists, and mathematicians in 1939 with the mission of breaking the German codes.

“Enigma gave the foundation to Alan Turing to develop the computer,” explained Rafal Janczyk, a Polish mathematician and CEO and co-founder of Enigma Pattern.

Rejewski and his team smuggled their cracked Enigma machines out of Poland and worked their way to Bletchley Park, where they donated the machines and their expertise to Turing. Building on Rejewski’s work, Turing was able to automate the cryptography that could crack the daily code. It took the better part of a year to decrypt their first message. They called their machine the Bombe, and it is widely considered a forerunner of the modern computer.

But the challenge was more elaborate than simply breaking the code once. Because the Nazis changed the rotor settings every 24 hours, each new day brought a new set of 15,354,393,600 password variants that had to be decrypted. Many times they worked through the night only to fail to break the code and have to start over the next day.

It was an exhausting, near-impossible task. And, seven decades later, Enigma Pattern wondered how modern technology like AI could change things, and if they could break the code in a fraction of the time.

Geeking out: Breaking Enigma with Modern AI

“The project started from the question, ‘What would Alan Turing be able to do nowadays if he had the current computing power and all the development around AI,’” said Janczyk. Since AI is still such a new discipline, the company allows their employees to spend 20 percent of their time on side projects of their choice that encourage out-of-the-box uses of AI.

Retracing Turing’s footsteps was a pet project of Lukasz Kuncewicz, Enigma’s Head of Data Science (and another Polish mathematician co-founder). Kuncewicz chose this project as a nod to the shared history of Brits and Poles using human intelligence to overcome the biggest obstacles of the Second World War. (Their third co-founder, Mike Gibbons, is British.)

Kuncewicz decided to recreate the Nazi navy’s version of the machine, which was the most sophisticated. His team started by recreating the machine, rotors, and plugs in Python. Initially, they tried to teach their AI to decode the Enigma code itself, but it didn’t work. Neither did AWS Lambda functions.

The problem, he said, was with the amount of computation. “Since the Lambda function from AWS is not very quick, and has some limits regarding execution time, the number of concurrent Lambda calculations was very high. So high that we actually spent more than a week going from one AWS department to another, trying to squeeze a decision from them regarding extending our limit.”

Enter DigitalOcean. “We only use [DigitalOcean] for quick ‘bish bash bosh’ needs—they are very good when we need to have a bigger server run for a few hours,” he said. Enigma Pattern uses DigitalOcean for a variety of things—from learning environments, to quick compute tasks where results will be stored on their internal computers, to prototyping projects when they're not sure yet how many machines will be needed.

When Enigma mentioned the project, DigitalOcean quickly agreed to provide the ML 1-Click Droplets. It fit the company’s developer focus, said Mark Mims, the R&D Engineer who designed the ML 1-Click that launched last year, and demonstrated the ease of use, as an ML 1-Click Droplet can be spun up in a few minutes with (you guessed it) one click. “But if you’re looking to spin up 2,000 servers, you won’t be using the web UI,” said Mims. “That takes a call to the help desk.” Within half a day, DigitalOcean had hydrated the 1,000 Droplets used in the testing phase.

The next step for Kuncewicz and his team was training an algorithm to recognize German, which they did by using Grimms’ Fairy Tales, including Hansel & Gretel, Rapunzel, Cinderella, and Rumpelstiltskin; 200 tales in all. Why children’s stories? The AI didn’t have to decrypt German philosophy, but rather military telegraphs, which use as few words as possible, and fairy tales are written in similarly simple language. And it worked. Interestingly, in the end the AI could not understand German. But it did what machine learning does best: recognize patterns.
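The pattern-recognition idea can be sketched with something far cruder than a neural network: score a candidate decryption by how typical its letter bigrams are of German. This is an illustrative stand-in, assuming a tiny hand-picked training snippet rather than the 200 fairy tales; Enigma Pattern's actual model was a recurrent neural network.

```python
from collections import Counter

# A tiny stand-in corpus; the real project trained on 200 Grimm fairy tales.
GERMAN_SAMPLE = (
    "es war einmal ein mann der hatte drei soehne "
    "der wolf und die sieben jungen geisslein "
    "hansel und gretel verliefen sich im wald"
)

def bigrams(text):
    """Letter-pair counts, ignoring case, spaces, and punctuation."""
    letters = "".join(ch for ch in text.lower() if ch.isalpha())
    return Counter(letters[i:i + 2] for i in range(len(letters) - 1))

MODEL = bigrams(GERMAN_SAMPLE)

def german_score(candidate):
    """Mean corpus count of the candidate's bigrams: higher = more German-like."""
    grams = bigrams(candidate)
    total = sum(grams.values())
    return sum(MODEL[g] * n for g, n in grams.items()) / total if total else 0.0
```

Decryptions produced by a wrong key score near zero, because their bigrams (qx, zv, and the like) almost never occur in German text, while a correct decryption scores well above it.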

How 2,000 Droplets Broke the Enigma Code in 13 Minutes

It took two weeks for the team to train the machines and create the Python code, and another two weeks for the first successful attempt to decrypt a message. But in order to copy Turing’s success, a successful decryption had to be done in less than 24 hours.

Then they decided to try to break it by using sheer computing power, adding another 1,000 Droplets. I’ll let Kuncewicz explain the details:

“First,” he said, “one has to accept the fact that even if you have 2,000 Droplets, you still have billions of combinations to be checked. And the neural network that we used, however good at spotting the German language, is not a speed demon.

“It's because it uses recurrence, which gives you this boost when dealing with languages, but you pay with the calculation time. So the idea is, you need to separate the wheat from the chaff, and use the network only to check the best possible candidates.

“So for the AI to shine, we actually use 2,000 minions that do the tedious work. Everybody praises AI, but it's actually the minions that do 99% of the work. Life, right?”

“We wrote one minion in Python, and DigitalOcean has this very nice API for storing images. So you create one minion, say ‘DigitalOcean, please save it as an image,’ and then you say ‘DigitalOcean, please create 2,000 copies of it and make them run,’ and you have them.

“The code is really simple. It connects to the bus and gets a first not-yet-taken assignment. The assignment is a package of the gibberish text (the encoded message) and combinations of passwords to run on it. It checks the gibberish against every password, checks if the decoded message sounds like German, and if so, sends it through the same bus for more detailed inspection by the AI.

“And this is exactly what the Droplets do. They get their share of password combinations from RabbitMQ, they take a few letters of the gibberish they need to decode, they decode it using the given passwords, and apply a very crude (but very quick) check if at the end of this pipeline we have something that resembles German.”

If the code looks like German, it’s pushed back to the main server where the AI works its magic.

“The job is not coordinated in any way, each minion doesn't know anything about others—they are fully autonomic. This is great, because it means that we can have 200, 2,000, or 20,000 of them if we like (and if DigitalOcean allows). The more we have, the less time will pass before breaking the Enigma code.”
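That uncoordinated fan-out can be sketched with a shared work queue: each worker claims a batch of password candidates, tries every one on a snippet of ciphertext, and forwards anything plausible for closer inspection. In this sketch, `queue.Queue` and threads stand in for RabbitMQ and the Droplet fleet, and `decrypt`/`looks_like_german` are toy placeholders (the password "geheim" and the sample strings are invented for illustration) for the real rotor code and the statistical check.

```python
import queue
import threading

CIPHERTEXT = "QMXZKWPLOJVHGTRD"  # stand-in for a few letters of encoded message

def decrypt(ciphertext, password):
    # Placeholder: a real minion runs the Enigma rotor simulation here.
    return "im wald der wolf" if password == "geheim" else "xqzvkwp"

def looks_like_german(text):
    # Placeholder for the crude-but-quick statistical check.
    return "der" in text or "und" in text

def minion(jobs, hits, ciphertext):
    """One autonomous worker: it knows nothing about the other minions."""
    while True:
        try:
            batch = jobs.get_nowait()        # claim a not-yet-taken batch
        except queue.Empty:
            return                           # queue drained: this minion is done
        for password in batch:
            plain = decrypt(ciphertext, password)
            if looks_like_german(plain):
                hits.put((password, plain))  # hand upstream for AI inspection
        jobs.task_done()

jobs, hits = queue.Queue(), queue.Queue()
for batch in [["aaa", "aab", "aac"], ["geheim", "zzz"]]:
    jobs.put(batch)
workers = [threading.Thread(target=minion, args=(jobs, hits, CIPHERTEXT))
           for _ in range(4)]
for w in workers:
    w.start()
for w in workers:
    w.join()
```

Because each worker only ever talks to the queue, scaling from 4 threads to 2,000 Droplets changes nothing in the worker code, only the number of copies launched.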

The 2,000 virtual servers ran through 41 million combinations per second; at that rate, roughly 32 billion combinations in 13 minutes. After 13 minutes of minion work, boom! The new Bombe had broken the code.

How 2,000 Droplets Broke the Enigma Code in 13 Minutes

Enigma Pattern: Who are these People?

“AI is being called the new electricity,” said Janczyk, “because it will be in everything.” Enigma Pattern works with companies that already collect big data but are unsure of the ways to harness its power. “You would be surprised at how many companies store big data but don’t know how to put it to use,” he said. “For example, a coffee chain would rather throw up a new store than delve through the data to determine how to optimize the stores they already have, because they know how to open a new store and don’t know how to dig through the data.”

One of their clients has a fleet of over 10,000 cars on which they collect a variety of raw data. Janczyk and his team sat down with the client to discuss the pain points of the business, how they might use the data they already had to help ease the pain, and how AI could help.

Tires are a significant business cost. In addition to the price of the tires is the cost of maintenance and driver downtime. If you don’t change the tires in time, you’re endangering the life of your drivers. Change them too often, and you lose money. It turns out, you can teach a machine to hear the level of wear on a tire.

“Out of the sound of the spinning tire, we were able to teach the machine the level of wear of the tire,” Janczyk said. “Now the company is able to change tires based on the sound of the wear and automatically schedule downtime, which saves lives and money.”

“With AI and ML, there is such an unlimited amount of possibility, which is what makes it so exciting,” said Janczyk. “That’s what makes my work fascinating: finding new uses for AI.”

Who knows what mysteries AI will solve in the future? By appreciating the problems that Enigma presented to previous generations and applying modern techniques, we can expand our vision for what AI can accomplish in today’s world.

To see how Enigma functioned, check out this link or watch it in action on YouTube.

To learn more about Alan Turing and the work done at Bletchley Park, check out Andrew Hodges’ acclaimed biography of the computing legend, titled “Alan Turing: The Enigma.”

You can check out Enigma Pattern's code on GitHub, with a warning from Kuncewicz that it’s a bit messy.

TC Currie is a journalist, storyteller, data geek, poet, body positive activist and occasional lingerie model. After spending 25 years in software development working with data movement and accessibility, she wrote her first novel during National Novel Writing Month and fell in love with writing.


Day 6: Unprepared for heartbreak

Published 22 Jun 2018 by legoktm in The Lego Mirror.

Part of a series on my journalism faculty-led program through Italy and Greece. For privacy reasons, I've changed the name appearing in this post.

In Boy Scouts, we were always taught to Be Prepared. And I thought I had prepared for everything (first aid kit, flashlights, emergency food, etc.) before heading on this trip – that is, until I met Joshua. Joshua is a 17-year-old refugee from Sierra Leone.

By his own account, Joshua was a slave. He “worked” for a man who did not pay him. He was just given food, and only once something broke or tore was he given a replacement. He did all the work, but reaped none of the benefits.

Joshua ran away, traveled 150 kilometers on foot, and then worked odd jobs to get rides to cover the rest of the distance. While in the desert, he said some of his friends were bitten by animals and died. He now stays in a center in Italy with other unaccompanied minor boy refugees, living a typical teenage life.

He unsuccessfully tried out for the nearby club soccer team, but was happy for his three friends that made it. Since they’re not adults yet, the boys are required to go to school, and are taken care of by a house-mother who runs the center.

Joshua’s English is better than his Italian these days – he listens to American music, and likes Selena Gomez, but doesn’t care much for Taylor Swift.

But just when you think he might be a normal teenager, you notice the pain in his voice that sets him apart. Joshua looked at us as if we were crazy when we asked him if he ever skipped school – he quickly said he wasn’t going to waste any opportunity given to him, especially school.

Since he’ll be an adult soon, Joshua told us he wanted to be a painter. At first we thought he wanted to follow in the footsteps of famous Italian painters like Da Vinci or Michelangelo, but he meant something else entirely.

Joshua wants to paint buildings, like a general contractor might. He started to explain to us the different types of stucco and how he would paint them.

We asked if Joshua had any higher aspirations or a dream job, to which he had a simple response: “I will take any job that I can get.”

He credits God with keeping an eye out for him, and is a devout Christian. He attends Church every Sunday, and even joked that it’s just him and “old people”.

But Joshua said that he knows he cannot expect God to provide everything for him – he needs to continue to work hard and take advantage of what’s given to him. He has his official refugee papers, which should make it easier for him to stay in the country and get a job.

Despite all the help and support he’s received in Italy, for which he said he is extremely grateful, he wants to eventually go somewhere else. You see, when we first introduced ourselves and told him we were from California, his face lit up, and he immediately exclaimed, “America! The best country on Earth!”

I did not want to crush his dreams, nor lie to him, so I kept quiet. But that, that was when my heart broke.


cardiParty 2018-07 Melbourne with Adrian Doyle

Published 22 Jun 2018 by Hugh Rundle in newCardigan.

cardiParty 2018.07 (Melbourne) with Adrian Doyle

Join us for a talk with artist Adrian Doyle about his exhibition ‘You Are All The Same!’ and his art practice, at Dark Horse Experiment.

Find out more...


How to add custom function to User.php in MediaWiki?

Published 21 Jun 2018 by Erik L in Newest questions tagged mediawiki - Stack Overflow.

I added the following custom function to User.php:

public function isUpgraded() {
    return true;
}

and the following in my Foreground.skin.php which is my skin/theme for the wiki to simply access the value that the function returns:

$isUpgraded= $wgUser->isUpgraded();

but I get the following exception:

Fatal error: Uncaught Error: Call to undefined method User::isUpgraded() in /home/siteX/public_html/siteX.com/wiki/skins/foreground/Foreground.skin.php:106
Stack trace:
#0 /home/siteX/public_html/siteX.com/wiki/includes/skins/SkinTemplate.php(251): foregroundTemplate->execute()
#1 /home/siteX/public_html/siteX.com/wiki/includes/OutputPage.php(2388): SkinTemplate->outputPage()
#2 /home/siteX/public_html/siteX.com/wiki/includes/exception/MWExceptionRenderer.php(135): OutputPage->output()
#3 /home/siteX/public_html/siteX.com/wiki/includes/exception/MWExceptionRenderer.php(54): MWExceptionRenderer::reportHTML(Object(Error))
#4 /home/siteX/public_html/siteX.com/wiki/includes/exception/MWExceptionHandler.php(75): MWExceptionRenderer::output(Object(Error), 2)
#5 /home/siteX/public_html/siteX.com/wiki/includes/exception/MWExceptionHandler.php(149): MWExceptionHandler::report(Object(Error))
#6 /home/siteX/public_html/dev. in /home/siteX/public_html/siteX.com/wiki/skins/foreground/Foreground.skin.php on line 106

Do I need to register the custom function in some other file in order for it to work? It might be worth mentioning that the code worked in MediaWiki 1.23, but I recently updated to 1.31 and can't get the above piece of code to work.


PHPWeekly June 21st 2018

Published 21 Jun 2018 by in PHP Weekly Archive Feed.

PHPWeekly June 21st 2018
Curated news all about PHP.  Here's the latest edition
PHP Weekly 21st June 2018
Welcome to the latest @phpweekly news.
 
Voting takes place next month on the nominated candidates for the Drupal Association Board. Check out each candidate profile before you make a decision on who gets your vote.
 
Also this week we take a look at the Auth0 Service, allowing you to set up authentication and authorisation features for your apps.
 
php[world] has been announced for November this year in Washington, bringing together several frameworks including Magento, WordPress and Laravel. The Call for Papers is now open.
 
And finally, the latest Full Stack Radio podcast has Derrick Reimer talking about his new communication platform, Level.
 
Have a great weekend,
 
Cheers
Ade and Katie

Please help us by clicking to our sponsor:

encrypt php scripts 
Protect your PHP Code
Why not try SourceGuardian 11. Click here to download a 14 Day Trial copy. Protect your code using Windows, Linux or Mac and run everywhere with our free Loaders.

Articles

Apache vs Nginx Performance: Optimisation Techniques
Some years ago, the Apache Foundation’s web server, known simply as “Apache”, was so ubiquitous that it became synonymous with the term “web server”. Its daemon process on Linux systems has the name httpd and comes preinstalled in major Linux distributions. Nginx — pronounced engine x — was released in 2004 by Igor Sysoev, with the explicit intent to outperform Apache.

What is WordPress Hosting? Learn More About The World's Most Popular CMS
WordPress has been around since 2003 and is the most popular blogging software on the market, powering almost a third of the known web. It has also now established itself as the content management system (CMS) of choice. We predict no end to its popularity, and have explored some of the issues surrounding hosting with WordPress below.

Is Your PHP e-commerce Site Doing Well Enough? The SEO Factors You Should Prioritise to Boost Conversions
The presence of numerous website builders and CMS platform options makes setting up a store online a lot easier than it was before. As a result, we see a lot of spick-and-span new e-commerce stores with a wide array of products but no customers. Many business owners are too quick to blame their website template and core code for the lack of traffic. But how responsible can your code infrastructure be if several other stores are using the same code type or the same template?

Meet the Drupal Association 2018 At-Large Board Member Candidates
Did you know you have a say in who is on the Drupal Association Board? Each year, the Drupal community votes in a member who serves two years on the board. It’s your chance to decide which community voice you want to represent you in discussions that set the strategic direction for the Drupal Association.

Tutorials and Talks

Building a PHP Framework: Part 5 – Test Driven Development
In part 4 we laid the foundation for Analyze. Now it’s time to write the very first lines of code!
 
Collector Pattern for Dummies
I wrote Why is Collector Pattern so Awesome a while ago, but I got feeling and feedback that it's way too complicated. The pattern itself is simple, but put in framework context, it might be too confusing to understand. That's why we look on collector pattern in minimalistic plain PHP way today.
 
Creating a Decent Laravel Deploy Script
A good deploy script can save you time and speed up your application, and it only takes a few minutes to set one up. I have a standard deploy script which I use for almost all of my projects, which I'm going to break down and share with you.
 
Doctrine ORM and DDD Aggregates
As I discovered recently, you don't need an edge case to drop Doctrine ORM altogether. But since there are lots of projects using Doctrine ORM, with developers working on them who would like to apply DDD patterns to it, I realized there is probably an audience for a few practical suggestions on storing aggregates (entities and value objects) with Doctrine ORM.
 
Building an Image Gallery Blog with Symfony Flex: The Setup
This article is part of a zero-to-hero project - a multi-image gallery blog - for performance benchmarking and optimisations. (View the repo here.) In this part, we’ll set our project up so we can fine tune it throughout the next few posts, and bring it to a speedy perfection.
 
Scheduling Posts on Github Pages with AWS Lambda Functions
If you are reading this post, it means it worked! I scheduled this post yesterday to automatically publish at 9am the next day, PDT. I’ve been trying to find a solution for this a few times, but most recently realized that with AWS Lambda functions it might have finally become possible to do this without managing a whole server.
 
Using API Gateway with Serverless & OpenWhisk
As with all serverless offerings OpenWhisk offers an API Gateway to provide HTTP routing to your serverless actions. This provides a number of advantages over web actions, the most significant of which are routing based on HTTP method, authentication and custom domains (in IBM Cloud).
 
Self-Host Your Team’s Git With Gitolite
Designed in 2005 by Linus Torvalds for the needs of the Linux Kernel development team, the Git source code management system has become widely accepted outside the community. For more info check out A Short History of Git. Free, fast, distributed, feature-rich, and yet simple to use, it has become almost indispensable today for storing, comparing, and collaborating on all types of programming projects, and even for other kinds of documents.
 
Authentication and Authorization Using Auth0 in PHP
In this article, we're going to explore the Auth0 service, which provides authentication and authorization as a service. Auth0 allows you to set up basic authentication and authorization features for your apps in the blink of an eye.
 
5 Usages of Static Keyword in PHP
Static is a PHP keyword with many usages. It is almost universally used, though there are many variations of it. Let’s review all five of them.
 
Automatically Open Files on Artisan “Make” Commands
“Open on Make” is a neat little package by Andrew Huggins that makes it easy to have newly created files open in your editor of choice.
 
The Dangers of PHP's $$
A PHP question I particularly like to ask candidates at a job interview is to explain a bit of code that includes the $$ syntax for variable variables. It’s great if the candidate is already familiar with this feature of PHP; but what’s more important to me is that once the candidate understands how this syntax works that they can describe potential issues with using it.
 
How to Quickly Fix WordPress Mixed Content Warnings (HTTPS/SSL)
Running your WordPress site over HTTPS is no longer optional. 🔒 Not only is it more secure (everything is encrypted, nothing passed in plain text), but it also builds trust, is an SEO ranking factor, and provides more accurate referral data. Performance issues tied to encryption have been fixed for the most part thanks to HTTP/2 and Let’s Encrypt has changed the entire industry by providing you with an easy way to get free SSL certificates.

News and Announcements

Announcing Ensemble
Bringing your Composer dependencies together.

Laravel Conf - July 8th 2018, Taiwan
As the biggest PHP and Laravel community in Taiwan, we are proud to announce LaravelConf Taiwan will take place on July 8, 2018. Come and enjoy inspirational talks and making friends with enthusiastic developers like you!

Northeast PHP Conference - 19th-21st September 2018, Boston
Our event is a community conference intended for networking and collaboration in the developer community. While grounded in PHP, the conference is not just about PHP. Talks on web technology, user experience, and IT management help PHP developers broaden their skill sets. Tickets are on sale now.

Symfony Live - September 27-28th 2018, London
Symfony is proud to organise the 7th edition of the British Symfony conference and to welcome the Symfony community from all over the UK. Join us for 2 days of Symfony to share best practices, experience, knowledge, make new contacts and hear the latest developments with the framework! The Call for Papers is open for another few days, and Early Bird Tickets are on sale now.

php[world] - November 14-15th 2018, Washington DC
PHP as a language and a community has been rapidly changing in the last few years. A staggering 83% of the Web runs on PHP, and those websites are built on frameworks such as Drupal, WordPress, Magento, Symfony, ZF and Laravel, each of which has its own strong community. We created a conference designed to appeal to all these communities and bring them together. Hence, php[world] was born. The Call for Papers is now open.

Podcasts

Three Devs and a Maybe Podcast - PHP Was Not Designed For That?! with Joe Watkins
In this week’s episode we catch up with Joe Watkins. We start off discussing a recent blog post he wrote about the unhelpful ‘just because you can, doesn’t mean you should’ response he sees surrounding some of his PHP extensions. From here we move on to highlight a debugger you can ‘composer require’, the reasons behind creating such a tool, and how it works. This leads us on to mention some updates to uopz for PHP 7 support, a weak references RFC he has recently published, and future plans for PHP. Finally, we wrap up by talking about a CommonMark extension he has published, and how CQL provides the ability to efficiently traverse a document.
 
Full Stack Radio Podcast Episode 91: Derrick Reimer - Designing a Calmer Team Communication Platform
In this episode, Adam talks to Derrick Reimer about the product design decisions behind Level, a new team communication platform Derrick is building. They also talk about Derrick's decision to open-source the entire codebase, despite the fact that he's building a real business around it.
 
Topics include Microsoft’s reported acquisition of GitHub, and why the stakes have never been higher for Apple software.
 
Post Status Draft Podcast - Productizing Your Service Business, with Brian Casel
In this episode, I interview Brian Casel, the owner of Audience Ops, a productized content service he’s built to employ 30 people. He also runs a podcast and course on helping others productize their services.
 
PHP Web Development Podcast Ep #3 - Why Symfony
This week I have the pleasure of speaking to Dan Blows. Currently a tech lead, he has done some really cool stuff: he’s in the top 3% on Stack Overflow and the top 10% of PHP developers in Europe, speaks at meetups, has spoken at a European conference in front of hundreds of people, and trains and mentors junior developers, among much more.

Reading and Viewing

Cloudways Interview - Ilona Filipi Sees “More and More Clients Opting for WordPress”
Today we have interviewed Ilona Filipi, founder and managing director at Moove Agency, a London-based web development agency that builds and supports high-performing websites and applications and helps global businesses succeed in the digital world. She has 10 years of extensive experience in the digital field and a working relationship with clients such as the BBC, O2, etc.
 
Bizarro Devs - Send In The Drones
A weekly newsletter with all the weird and wonderful tech news.
 
Query builder for JSON, Zend API wrapper for go/golang, modern dockerized LAMP and MEAN stack alternative to XAMPP, simple chat with ReactPHP sockets, searchable field-level encryption for PHP projects, and more. Keep on reading!
 
Fix WordPress Plugins and PHP 7.1 Brokenness: Mobile-first WordPress Speed (Plugin Surgery: Tips and Tricks Book 3) by Steve Teare - Kindle Edition
A 3,760-word article at WP Elevation is about the pain of producing websites. The article expresses everything we hate about website creation. The thought of building “explosive live hand grenades” stresses us. Just reading the article was stressful. Why?
 
PHP The Right Way: Your Guide to PHP Best Practices, Coding Standards, and Authoritative Tutorials by Phil Sturgeon and Josh Lockhart - Kindle Edition 
If you are getting started with PHP, start with the current stable release of PHP 5.6. PHP has added powerful new features over the last few years. Though the incremental version number difference between 5.2 and 5.6 is small, it represents major improvements. If you are looking for a function or its usage, the documentation on the php.net website will have the answer.
 
PHP 7 Data Structures and Algorithms Complete Self-Assessment Guide by Gerardus Blokdyk - Kindle Edition 
Why are PHP 7 Data Structures and Algorithms skills important? Has the direction changed at all during the course of PHP 7 Data Structures and Algorithms? If so, when did it change and why? Who are the PHP 7 Data Structures and Algorithms improvement team members, including Management Leads and Coaches?

Jobs





Do you have a position that you would like to fill? PHP Weekly is ideal for targeting developers, and the cost is only $50/week for an advert. Please let me know if you are interested by emailing me at katie@phpweekly.com.

Interesting Projects, Tools and Libraries

firefly-iii
"Firefly III" is a (self-hosted) manager for your personal finances.
 
skeleton-php
A skeleton repository for my packages.
 
atutor
An Open Source Web-based Learning Management System (LMS) used to develop and deliver online courses.
 
sublime-php-grammar
A smart macro PHP plugin for Sublime Text.
 
BehatNoExtension
This Behat extension makes it possible to extend Behat without having to write an extension yourself.
 
php-verge
A basic PHP library to talk to a VERGEd daemon to get you started in your VERGE project!
 
yay
YAY! is a high level parser combinator based PHP preprocessor that allows anyone to augment PHP with PHP.
 
money
A money and currency library for PHP.
 
bladeone
BladeOne is a standalone version of Blade Template Engine that uses a single PHP file and can be ported and used in different projects.
 
tus-php
A pure PHP server and client for the tus resumable upload protocol v1.0.0.
 
catalyst
Catalyst serves to facilitate the process of commissioning through a simple, unified, and mobile-friendly way for artists to easily list their prices, receive and track commissions, and much more.
 
soap-client
Sick and tired of building crappy SOAP implementations? This package aims to help you with some common SOAP integration pains in PHP. Its goal is to make integrating with SOAP fun again!

Please help us by clicking to our sponsor:

encrypt php scripts 
Protect your PHP Code
Why not try SourceGuardian 11? Click here to download a 14-day trial copy. Protect your code using Windows, Linux or Mac and run everywhere with our free Loaders.
 

So, how did you like this issue?

Like us on Facebook | Follow us on Twitter
We are still trying to grow our list. If you find PHP Weekly useful please tweet about us! Thanks.
Also, if you have a site or blog related to PHP then please link through to our site.

unsubscribe from this list | update subscription preferences 
 
Copyright © 2018 PHP Weekly, All rights reserved.
Email Marketing Powered by MailChimp

The Inevitable

Published 20 Jun 2018 by jenimcmillan in Jeni McMillan.

NudeSamothraki2018.jpg

Today I climb a mountain on this remote Greek Island. Beyond the source of the waterfalls, lizards cling to the cliff faces. I test each hand hold before I give my weight to the mountain. The hard volcanic rock has been broken into sharp and unstable shards by the winter elements. Only the lichen-covered rocks are stable. I pick my path. The sky is racing past. A rush of adrenalin hits me. I consider the possibility that I could die here. Why not? It’s a beautiful place where I am completely at peace.

I see a species of ants that I know well from the Australian bush. We have history. Once I saw them carry away bones from a snake carcass. I’ve stood barefoot on their mounds for a dare. They don’t sting but their meat-eating preference makes this a good test of endurance. Sure, it’s crazy, but I had time and it was the days before I carried a laptop and had 305 Facebook friends. Today I feel only completeness. This is not an Italian drama. Perhaps it’s a Greek tragedy? Except there is no family gathering at my feet. I’m grateful. They need a wash.


Day 5: Exhaustion (only a bit)

Published 20 Jun 2018 by legoktm in The Lego Mirror.

Part of a series on my journalism faculty-led program through Italy and Greece.

It's day 5, I did not expect to be this tired, so quickly. I think I'm missing the siesta time that I'm supposed to get in Italy. But, I'm also having a blast, and at least trying to take every opportunity to explore / learn things. It feels waaaay longer than 5 days.

Today we started out heading to the university, saw some awesome street art (I'm going to be doing a "photo essay" on what we've seen so far) on the way there, and then doubled back to get a few photos.

We walked a bit around the port and docks area, picking up b-roll and talking to some people. We went up to the tents where migrants are brought after getting off the boats that bring them in. To be honest, it was pretty underwhelming.

We had lunch at a nice panini place, drank some Fanta, and then headed back to the hotel to start putting together our assignments. I think our group has started to get into rhythm - I do the writing parts, another person does images, and the last does the photos (generally, it's more nuanced than that in reality), and then we all edit each other's stuff. So far it seems to be working.

There's a bigger meta question starting to loom over my head now: if this is what real reporting work is like, do I want to do this as a career? To which my answer is still the same so far: "Maybe." I think I need more time to make a decision.


The Next Wave: DigitalOcean’s New CEO

Published 20 Jun 2018 by Ben Uretsky in The DigitalOcean Blog.

A few months ago, I announced my plans to find my successor as we approached our next phase of growth. Today, I am very excited to announce that Mark Templeton will be joining us as DigitalOcean’s new CEO.

We were looking for a leader who could scale our operations, evolve our go-to-market strategy, and help us reach our audacious vision of becoming every developer’s cloud platform of choice when deploying software. After spending time with Mark, I knew that he was the perfect fit for us.

From 2001 to 2015, he served as president and CEO of Citrix Systems. While at Citrix, Mark helped grow the business from $15 million in revenue with one product, one customer segment, and one go-to-market path to a global industry leader with more than 100 million users and annual revenue of over $3 billion. He joined the company prior to its initial public offering and served in a leadership capacity throughout his 20-plus years with the organization. Under his leadership, Citrix earned multiple “best places to work” awards and Mark himself was honored with several awards including a coveted spot on Glassdoor’s Highest Rated CEOs list in 2013.

When I asked him what he saw in DigitalOcean, he replied that he was inspired by our unique position in the market and focus on delivering simplicity at scale. He went on to talk about our incredible team, happy customers, and what perfect timing it was to make an enormous impact on the industry. He compared the opportunity to his early days at Citrix — when the company had a singular focus on offering the best remote access technology in the world. That focus was the seed of the Citrix vision of a virtual workplace, inspired by the deep belief that work was not a place.

Mark believes we have an incredible opportunity to serve tens of millions of developers and the digital-first businesses they go on to create. He shares our focus on simplicity and supports our efforts to ensure our cloud is an enabler for a future that accelerates software development and inspires innovation among our customers. Mark will help DigitalOcean operate and scale to a whole new level while doubling down on our strategy to be the world’s simplest cloud experience.

We are so fortunate to gain a leader with Mark’s experience, talent, and vision. DigitalOcean is at an inflection point and we’ve laid the groundwork together for this rocket ship to soar. We now have a $200 million run rate and a community that is more than 3.5 million developers strong. As we enter into our next chapter, I am confident Mark is the right leader to inspire and scale our team, accelerate our business, and most importantly, uphold our commitment to our customers and the developer community at large.

Please join us in welcoming Mark on what promises to be an exciting journey!

Ben Uretsky
Co-Founder, DigitalOcean


Diversity at W3C; launch of TPAC Diversity Scholarship

Published 20 Jun 2018 by Jeff Jaffe in W3C Blog.

Diversity has become a major issue across society. We would like W3C to be a model of supporting diversity. As an international organization we can see the immense value we gain from having expertise from across multiple countries and cultures. Soon 50% of the world will be on the Web. We know we will need to reflect the diversity of the whole of our world as more and more people begin to access, use and continue to create the Web in all its full potential.

During the Spring W3C Advisory Committee Meeting, a panel on diversity focused on progress we have made and how much more is required. I shared graphs on diversity, and the W3C Advisory Committee introduced a Diversity Scholarship.

Content of this post:

Press clippings with diversity headlines

Collected information

Different participants are involved in different ways, with mailing lists and GitHub postings distributed across many locations. As we don’t collect participants’ personal and demographic data to preserve privacy, it was difficult to gather data for different characterizations of diversity, but we were able to focus on gender and geography for several representative bodies (Advisory Board, Technical Architecture Group, W3C Management (W3M)).

W3C Advisory Board

The W3C Advisory Board provides ongoing guidance to the Team on issues of strategy, management, legal matters, process, and conflict resolution. The elected Members of the Advisory Board participate as individual contributors not as representatives of their organizations.

Advisory Board positions are member-nominated. The gender diversity of the AB over the last 20 years has never been great, although the diagram shows that this has been improving recently. We are working on outreach to encourage more women and underrepresented groups to run.

diagram of AB gender spanning 1998-2017

Looking at the Advisory Board by geography, all 9 members during the first two years were from Northern America. Starting in the year 2009 and in subsequent years, we’ve improved the geographical diversity and it’s kept improving over these last few years.

diagram of AB by geography spanning 1998-2017

W3C Technical Architecture Group

The W3C Technical Architecture Group (TAG) is a special working group within the W3C, chartered with stewardship of the Web architecture, to document and build consensus around principles of Web architecture, to resolve issues involving general Web architecture brought to the TAG, and to help coordinate cross-technology architecture developments inside and outside W3C.

TAG positions are member-nominated or appointed by the W3C Director. Gender diversity only improved from 0 to 1 or 2 women on the TAG, and only starting in 2011, in the 10th year of that group. We need to make more progress here.

diagram of TAG gender spanning 2002-2018

Looking at the TAG by geography, although the Northern American contingent is still pretty strong, improvement in diversity has been happening since 2013, coinciding with what some in our community have referred to as the TAG reform: outreach from the TAG co-chair and members in search of the right people involved in building the Web in smarter ways.

diagram of TAG by geography spanning 2002-2018

W3C Management

The W3C management team is responsible for the day to day coordination decisions for the team, resource allocation, and strategic planning.

The diagram uses percentages because the number of persons on W3M has changed over the years. There was a period when there was one woman at W3M. It has improved to some extent.

diagram of W3M gender spanning 1999-2017

Looking at W3M by geography, the picture is relatively good: still a little tilted towards Northern America, but not as much as before.

diagram of W3M by geography spanning 1999-2017

Concrete next steps to improve diversity at W3C

Promote diversity and inclusion

At its May meeting, the W3C Advisory Committee discussed actions that W3C should take to promote diversity, including:

TPAC diversity scholarship

A W3C Member, Samsung Electronics, proposed and agreed to fund a “Diversity Scholarship” for TPAC attendance, and W3C Members The Paciello Group, Consensus System and Microsoft stepped up to sponsor diversity scholarships.

The Diversity Scholarship includes plane and hotel as eligible expenses.

Applicants must be from a traditionally underrepresented and/or marginalized group in the Web community, including but not limited to: persons identifying as LGBTQ, women, persons of color, and/or persons with disabilities; and be unable to attend without some financial assistance. Applications are due by 15 July, by supplying nominal information to the Team (e.g. under-represented community, which role in what group at TPAC, estimated costs to travel and participate at TPAC, why a subsidy is needed).

We hope to be able to support multiple applicants with scholarships. Therefore, please indicate whether you might be able to share some of the costs you have mentioned.

W3C Management will decide by 25 July how to allocate the available money based on available funding, number of applicants and information supplied. Applicants will be notified personally and recipients will be able to register in time for TPAC.

If you or your organization wishes to become a sponsor, please e-mail us!


The dark side of nature writing

Published 20 Jun 2018 by in New Humanist Articles and Posts.

The recent renaissance in nature writing also revives an overlooked connection with fascism.

Day 4: First round of interviews

Published 20 Jun 2018 by legoktm in The Lego Mirror.

Part of a series on my journalism faculty-led program through Italy and Greece.

Today we went out onto the streets and started looking for people to talk to. Ideally we were looking for people to talk to us on video, but we knew that not everyone would be comfortable with that.

We have two interviews planned for later this week, which should hopefully give us some good content. We struck out a few times when people said no to us - but after we stopped trying, that's when we got the good stuff.

We started to give up and just shoot some b-roll so the day would not be a full failure, when one of the musicians in the University square walked in front of our camera and started playing for a full minute.

And then when we went to lunch, we found an empty pizzeria, ordered, and noticed that the chef was making the pizzas right in front of us. We quickly set up the tripod and started recording. And after we finished and paid, we asked if he'd be willing to do a quick interview, and his son translated for us. All of his responses are in Italian so we're going to have to translate and subtitle them before being able to publish it.

Later in the day we went to Siracusa and Noto for some touristy activities. Both were just incredible. Every time I go and see anything that old, the question in my head always is "What have I done that will last that long?"

My favorite place was the Ear of Dionysius. We played some music (Shake It Off, of course) to try and get the acoustics to work, but couldn't. I don't think my phone speaker was loud enough to get it started. It was at this point that other people in my group were disappointed by the guidebook they paid a few euros for, while the Wikipedia articles I was reading had answers to all of their questions. Yay for free knowledge :-)

On Mastodon (h/t Greg) I saw that the Italian Minister of the Interior had called for the registration of all Roma people in Italy, which really saddened me. Coincidentally, tomorrow is World Refugee Day, and we'll be going to some event for that.


Day 3: Return to the mountain

Published 18 Jun 2018 by legoktm in The Lego Mirror.

Part of a series on my journalism faculty-led program through Italy and Greece.

Today we travelled to a few (relatively speaking) mountain towns in Sicily. Ignoring some of the tourist-y stuff, it really reminded me of Esino Lario. The streets, buildings, and elevation gain were all pretty similar.

I got to do a bit of hiking (and just get out of the tourist trap!) when we climbed up a few hundred steps to the top of a peak where there was a monastery (I think; OpenStreetMap wasn't very clear). The view was beautiful, and the hike was totally worth it. We were only two minutes late meeting up with the group - mostly because we saw a lemon tree and were trying to figure out how to grab one even though the tree was three feet above us (we never did in the end).

We had lunch at a, um, interesting restaurant. I'll leave it at that. The food was great, I love trying out all the different types of pizza here. The crust seemed a lot thicker than the pizzas I had in Rome though.

We walked through a few more towns, and even though our van driver was acting as our tour guide, I had fun trying to read the Italian Wikipedia articles for churches and monuments we saw. Spoiler alert: I just looked at the pictures, I didn't understand 95% of the words.

Oh, and the bar from The Godfather Part II, and the church where they got married? We visited both of those. There was a small statue honoring Francis Ford Coppola, which I submitted to OpenStreetMap.

At the end of the day, we went back to Catania and had dinner at an Irish Pub. Coincidentally, England was playing in the World Cup at that time - it was a nail biter to watch until the very end.

Tomorrow we will be going out into Catania and getting interviews and gathering other source material to use in our video assignments.


I am trying to activate the Vector skin in MediaWiki

Published 18 Jun 2018 by N Vora in Newest questions tagged mediawiki - Stack Overflow.

I have set up a wiki on a hosted server. Whenever I go to the wiki, this is what I see:

[screenshot of the wiki]

I have followed the instructions, but the skin still doesn't get installed. What can I do? Thanks!


Why sci-fi and economics have more in common than you think

Published 18 Jun 2018 by in New Humanist Articles and Posts.

Economics isn’t just about describing the world, but imagining alternatives. Science fiction helps show us how.

Day 2: Maintaining eye contact

Published 17 Jun 2018 by legoktm in The Lego Mirror.

Part of a series on my journalism faculty-led program through Italy and Greece.

Today was just a travel day - we're now in Catania, Sicily. It feels like Italy, but a lot more laid back than Rome. I think I would be pretty laid back too if I lived on an island.

Our plane

Our plane was named after Mozart? Neat.

Since most of our day was taken up by traveling, moving hotels, etc., we didn't have any specific class events today, so a few of us went out exploring after dinner. We walked through most of the tourist areas, and came to the square with the historical buildings of the University of Catania. There were some collaborative art projects happening, as well as one where it looked like people were just staring at each other. My friend and I went closer, and saw the signs (paraphrased, because I forgot to take a picture):

"Are real human connections dead?" "Spend a minute of uninterrupted eye contact"

There were some empty cardboard seats on the floor, so we sat down, and gave it a shot. One set of people next to us looked like they were having a staring contest, and another couple on the other side were looking deeply into each other's souls to the point that it seemed like they were engaged in the most PDA I've ever seen with the least amount of touching.

We set a timer for a minute and started talking. After 10 seconds of talking, my eyes seemed to naturally gaze away - I'd have to consciously force them back. Most of the in-depth conversations I have happen online, where there is no eye contact that needs maintaining - so this was pretty different. The timer for a minute went off, but since we were in the middle of our conversation, we kept going for about ten minutes (before we realized we needed to meet back up with our other friend!).

I learned a lot of new things about my friend (we have a lot of things in common!) in those ten minutes, which I doubt I would have ever learned or even asked about throughout our trip together. I also learned a bit about myself - I can maintain eye contact for ten minutes, but it takes a conscious effort. I would not say that "real human connections are dead", but I would question whether eye contact is a relevant indicator. I wonder if anyone has done proper research in this area, like having subjects have a conversation without mandatory eye contact, and then with it, and seeing the effect on recall of conversation topics, as well as emotional feelings throughout the conversation. Something for another day :-)


Day 1: Visiting a migrant camp in Rome

Published 17 Jun 2018 by legoktm in The Lego Mirror.

As part of earning my journalism degree at San Jose State, we're required to study abroad. I'm currently on a faculty-led program (FLP) to Italy and Greece to interview migrants and refugees, document their situation, and gain real-world reporting experience. I will try to blog daily for the next three weeks...we'll see if it lasts!

Our trip started in Rome, Italy. Day 0 was meeting at our hotel, getting dinner together, and then enjoying some gelato. The next morning we visited the Colosseum and a few other tourist attractions before the main event for the day: visiting a camp where migrants were staying. Unfortunately, that's all I can write about for now, as I am still working on a full story to publish about our experience at the camp.

In the evening a few of us went to a rally that was protesting the murder of an immigrant, who was a union leader defending workers' rights (at least, that's my understanding). There was also the issue of the anti-immigration views of the minister of internal affairs. Aside from not understanding most of what was being said since it was in Italian, I also struggled since there was no article on Unione Sindacale di Base on Wikipedia. It would be great if someone could write one!

Sign at the rally

Take from the rich, and give to the poor!

Djilba and DDD Perth 2018

Published 17 Jun 2018 by Ashley Aitken in DDD Perth - Medium.

Djilba is the time and theme for DDD Perth 2018

DDD Perth is just around the corner and if you haven’t registered yet, get in quick before tickets sell out, as they did last year, for the best and biggest software community conference in Perth.

The conference is on all day Saturday 4th of August at the Perth Convention and Exhibition Centre, with a fantastic line-up of speakers, including renowned keynote speakers from around the world.

Djilba version of the DDD Perth 2018 Logo

What’s interesting is that this date coincides with the start of the Noongar season (August-September) of Djilba. More about that later but did you know the Noongar people have six, not four, seasons?

1. Birak (December to January)

2. Bunuru (February to March)

3. Djeran (April to May)

4. Makuru (June to July)

5. Djilba (August to September)

6. Kambarang (October to November)

For DDD Perth this year we are theming the conference around the Noongar nation season of Djilba because it also aligns with a lot of what we are trying to do and achieve with the conference.

Djilba is a transitional time of year, moving from rainy to sunny days… DDD Perth is reflecting on the past and looking to the future of software and those working in the community.

Acacia Wildflowers

Djilba is the season of conception and when the wildflowers start to bloom… DDD Perth is looking for new ideas, even wild ideas, and new perspectives on all aspects of software.

Moongar Pigface Wildflowers

Djilba coincides with the massive explosion of wildflowers in the South West… DDD Perth is the largest and most diverse community software conference in Western Australia.

Interestingly, Djilba is also the time that magpies start swooping to protect their young… DDD Perth is committed to supporting and developing juniors in the software community.

A Magpie Swooping

Finally, we also acknowledge the traditional custodians of the land we are meeting on, the Whadjuk people of the Noongar nation (which, by the way, means “knowledge”).

We wish to acknowledge and respect their continuing culture and contribution they make to the life of this city and region, and pay our respects to the Elders past, present and emerging.

We are also grateful to the South West Aboriginal Land and Sea Council for their help with the theme and sourcing a “Welcome to Country” for this year’s DDD Perth conference.

So swoop on in like a magpie to register for DDD Perth 2018 (if you haven’t already done so) and come along on the day to let the blooming wildflowers (speakers) prepare you for summer.

Hope to see you there amongst the wildflowers!

For more information on Djilba check out: http://www.bom.gov.au/iwk/nyoongar/djilba.shtml http://www.abc.net.au/local/photos/2015/07/27/4281604.htm

And click here for the correct pronunciation of Djilba: http://www.derbalnara.org.au/_literature_147910/djilba.mp3

P.S. If you would like a sweet conference t-shirt with the Djilba logo on it you can purchase them here.


Djilba and DDD Perth 2018 was originally published in DDD Perth on Medium, where people are continuing the conversation by highlighting and responding to this story.


mediawiki urls contain Chinese characters, how to change the urls creating rules for new pages?

Published 16 Jun 2018 by Edwin Sun in Newest questions tagged mediawiki - Stack Overflow.

I am setting up a Chinese-language site with MediaWiki, which means it's written in Chinese. But MediaWiki creates a URL containing Chinese characters for each new page, and I want its URLs written in English letters instead of Chinese words. Can anyone help and tell me how I can change its default URL rules?

See the image below; its URL contains Chinese characters, but I don't want this.

[screenshot of the page URL]


new note (near -32.068, 115.752)

Published 16 Jun 2018 by in OpenStreetMap Notes.

Comment

Created 8 days ago
Real estate agent: 'Property Gallery', no 231 south terrace. 2 bike parking racks in front.

Full note

Created 8 days ago
Real estate agent: 'Property Gallery', no 231 south terrace. 2 bike parking racks in front.

new note (near -32.068, 115.752)

Published 16 Jun 2018 by in OpenStreetMap Notes.

Comment

Created 8 days ago
Hairdresser: 'Rock Paper Scissors'. No. 227 South Terrace.

Full note

Created 8 days ago
Hairdresser: 'Rock Paper Scissors'. No. 227 South Terrace.

new note (near -32.068, 115.752)

Published 16 Jun 2018 by in OpenStreetMap Notes.

Comment

Created 8 days ago
Shop: 'Vanilla Gifts', vanillagifs.com.au, no. 229 South Terrace.

Full note

Created 8 days ago
Shop: 'Vanilla Gifts', vanillagifs.com.au, no. 229 South Terrace.

The River Knows

Published 16 Jun 2018 by jenimcmillan in Jeni McMillan.

DSC_0136

The village is a walk through ferns, following a goat track. I heard the goat herder’s wild animal cries at sunrise and the passing sounds of bells, bleats and hoofs sure-footed on stone. But I have no desire to go to the village. Instead I go to the waterfall to wash the city from my body and remember the sweet caress of the sun.


new note (near -31.981, 115.781)

Published 16 Jun 2018 by in OpenStreetMap Notes.

Comment

Created 8 days ago
dome has closed here. is empty place now.

Full note

Created 8 days ago
dome has closed here. is empty place now.

MediaWiki 1.31 and "Error: your composer.lock file is not up to date"

Published 16 Jun 2018 by jww in Newest questions tagged mediawiki - Webmasters Stack Exchange.

We are trying to upgrade from MediaWiki 1.30 to 1.31. We downloaded mediawiki-1.31.0.tar.gz from the MediaWiki site, backed up our files, and unpacked the tarball on top of the old MediaWiki installation. After the unpack we restored the old LocalSettings.php.

We are at Step 6 of the MediaWiki upgrade instructions:

When we run the update script from the mediawiki directory we get:

# php maintenance/update.php

Notice: Undefined index: SERVER_NAME in /var/www/html/w/includes/GlobalFunctions.php on line 1432

Notice: Undefined index: SERVER_NAME in /var/www/html/w/includes/GlobalFunctions.php on line 1432
MediaWiki 1.31.0 Updater

oojs/oojs-ui: 0.23.0 installed, 0.26.4 required.
pear/mail: not installed, 1.4.1 required.
pear/mail_mime: not installed, 1.10.2 required.
pear/mail_mime-decode: not installed, 1.5.5.2 required.
wikimedia/at-ease: not installed, 1.2.0 required.
wikimedia/html-formatter: 1.0.1 installed, 1.0.2 required.
wikimedia/ip-set: 1.1.0 installed, 1.2.0 required.
wikimedia/object-factory: not installed, 1.0.0 required.
wikimedia/php-session-serializer: 1.0.4 installed, 1.0.6 required.
wikimedia/purtle: 1.0.6 installed, 1.0.7 required.
wikimedia/relpath: 2.0.0 installed, 2.1.1 required.
wikimedia/remex-html: 1.0.1 installed, 1.0.3 required.
wikimedia/running-stat: 1.1.0 installed, 1.2.1 required.
wikimedia/utfnormal: 1.1.0 installed, 2.0.0 required.
wikimedia/wrappedstring: 2.2.0 installed, 2.3.0 required.
Error: your composer.lock file is not up to date. Run "composer update --no-dev" to install newer dependencies

Followed by:

# composer update --no-dev
-bash: composer: command not found

I found one post about it on MediaWiki's help forum: update.php says composer.lock not up to date. It was not helpful.

This is a production web server and it is missing some of the dev tools. In fact, it is a CentOS 7 server with PHP 7.0 from a different repo so I am not even sure we can install the right version of composer.

(We had to use the external repo because the distribution's native PHP was 5.7 or 5.8, if I recall correctly, which only supports MediaWiki up to about 1.24; we had to update PHP to get the latest MediaWiki with security fixes.)

None of us are web developers or web server admins by trade. When problems crop up like a failed upgrade then we struggle if the upgrade notes don't include solutions that work for us.

I guess my first question is, is it possible to download a mediawiki-1.31.0 tarball with everything needed for the upgrade? If so, where is it?

If not, then what else can we do to finish this upgrade?
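For what it's worth, the error comes from a dependency check: update.php compares the libraries present in vendor/ against what the new release requires. Conceptually it is a comparison like the sketch below (hypothetical standalone code with data copied from the updater output above, not MediaWiki's actual implementation):

```python
# Hypothetical illustration of the dependency check behind update.php's error.
# The names/versions are copied from the updater output; nothing is read from disk.
required = {
    "oojs/oojs-ui": "0.26.4",
    "pear/mail": "1.4.1",
    "wikimedia/utfnormal": "2.0.0",
}
installed = {
    "oojs/oojs-ui": "0.23.0",
    "wikimedia/utfnormal": "1.1.0",
}

for pkg, want in sorted(required.items()):
    have = installed.get(pkg, "not installed")
    if have != want:
        print(f"{pkg}: {have} installed, {want} required.")
```

As for fixing it: the release tarballs do ship a pre-populated vendor/ directory, so one commonly reported cause of this error is stale vendor/, composer.lock, or composer.local.json files surviving an unpack-over-the-top upgrade. If you do need Composer itself, it does not require a system package: you can download composer.phar from getcomposer.org into the wiki directory and run `php composer.phar update --no-dev`.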


new note (near -32.047, 115.752)

Published 15 Jun 2018 by in OpenStreetMap Notes.

Comment

Created 9 days ago
extra bit of the building here, with Gesha Coffee Co. cafe

Full note

Created 9 days ago
extra bit of the building here, with Gesha Coffee Co. cafe

How to Debug Common.js in Chrome Dev Tools?

Published 15 Jun 2018 by johny why in Newest questions tagged mediawiki - Stack Overflow.

My MediaWiki:Common.js draws, and attaches an event to, a button.

$('#btnSave').html('<button>Send HTTP Post</button>');

How can I debug Common.js without first stepping through MediaWiki core code and jQuery code?

I tried placing a debugger; statement in Common.js, but Chrome didn't seem to notice it.

$(document).on('click', '#btnSave', function() {
  debugger;
  $.ajax({
    url: "../api.php?action=edit&title=Portal:TagDescriptions&section=2&summary=Hello%20World",
    type: "POST",
    data: {
      Text: "Hello, world.",
      token: "c30460d9159a5e2eccca60944ef286405b2393d1%2B%5C"
    },
    contentType: "application/x-www-form-urlencoded",
    dataType: "json",
    success: function(data) {
      $('#lblDescription').html(data);
    }
  });
});
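Two techniques usually help here (assumptions based on standard Chrome DevTools behaviour, not verified against this particular wiki): appending ?debug=true to the page URL makes ResourceLoader serve modules as separate, readable files, and a sourceURL pragma gives an eval()'d script a name in the Sources panel so breakpoints stick. A runnable sketch:

```javascript
// Sketch: ResourceLoader usually eval()s Common.js, so it shows up in Chrome
// as an anonymous "VM" script and "debugger;" can be hard to hit. Loading the
// page with ?debug=true, or ending the script with a sourceURL pragma, makes
// it findable in the Sources panel. The handler name below is made up.
function onSaveClick() {
  debugger; // only pauses while DevTools is open; otherwise a no-op
  return "clicked";
}
console.log(onSaveClick());
//# sourceURL=MediaWiki-Common.js
```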

How to automatically add a button at the end of every sub-heading in mediawiki 1.30?

Published 15 Jun 2018 by Vardaan in Newest questions tagged mediawiki - Stack Overflow.

I'm trying to automatically add a button at the end of every subheading in a MediaWiki article. My plan was to add the button markup to the page's HTML source (as seen with Ctrl+U), so I'm trying to locate the code that renders a subheading in the MediaWiki directory. In which file is the code for subheadings (or subsections) located? Or is there an extension I can use for this?
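MediaWiki wraps each rendered heading's text in a span with class mw-headline, so the usual extension-free route is a few lines of site JavaScript in MediaWiki:Common.js rather than patching core skin files. In the browser that would look roughly like `$('span.mw-headline').after(...)` (an assumption, simplified); below is a dependency-free string transformation of the same idea so it can run headlessly, with made-up button markup:

```javascript
// Headless illustration: append a button after every MediaWiki headline span.
// In the wiki itself the equivalent (assumption, simplified) would be:
//   $('#mw-content-text span.mw-headline').after(' <button class="sub-btn">Note</button>');
function addButtonsAfterHeadlines(html) {
  return html.replace(
    /(<span class="mw-headline"[^>]*>[\s\S]*?<\/span>)/g,
    '$1 <button class="sub-btn">Note</button>'
  );
}

const page = '<h2><span class="mw-headline" id="History">History</span></h2>';
console.log(addButtonsAfterHeadlines(page));
```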


pywikibot fails to upload large files

Published 15 Jun 2018 by Daniel Franklin in Newest questions tagged mediawiki - Stack Overflow.

On a Google Compute Engine Server (Linux instance-1 4.9.0-6-amd64 #1 SMP Debian 4.9.88-1+deb9u1 (2018-05-07) x86_64 GNU/Linux), pywikibot fails to upload large files with the following error:

pywikibot.data.api.APIError: missingparam: One of the parameters "filekey", "file" and "url" is required. [help: See https://chinadigitaltimes.net/space/api.php for API usage. Subscribe to the mediawiki-api-announce mailing list at <https://lists.wikimedia.org/mailman/listinfo/mediawiki-api-announce> for notice of API deprecations and breaking changes.]
1 pages read
0 pages written
Script terminated successfully.

I need to upload files up to 2GB. How can I do this with pywikibot?
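The usual route for multi-gigabyte files is chunked uploading, which pywikibot supports (the upload.py script's -chunked option, or a chunk_size argument on Site.upload — an assumption about your pywikibot version, so check `python pwb.py upload -help`). The missingparam error may simply mean the file parameter was dropped because a single POST exceeded a server-side limit, which chunking also sidesteps. The dependency-free sketch below only illustrates the chunking arithmetic, not the API call:

```python
# Illustration of what a chunked upload does: the file is sent in fixed-size
# pieces instead of one multi-GB POST. (The pywikibot call itself, e.g.
# site.upload(..., chunk_size=1024 * 1024), is an assumption - verify it
# against your installed version.)
def chunks(data, size=1024 * 1024):
    for i in range(0, len(data), size):
        yield data[i:i + size]

payload = b"x" * (2 * 1024 * 1024 + 5)
print(sum(1 for _ in chunks(payload)))  # → 3 (two full 1 MiB chunks plus 5 bytes)
```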


PHPWeekly June 14th 2018

Published 14 Jun 2018 by in PHP Weekly Archive Feed.

PHPWeekly June 14th 2018
Curated news all about PHP.  Here's the latest edition
PHP Weekly 14th June 2018
Hello to the PHP community, and welcome to PHPweekly.com.

Are you looking to recruit new staff?
Looking for a high standard of applicant?
Would you like to reach out to the PHP community to fill your position?
Where better to advertise your job openings than on phpweekly.com?

Do you want to entice new talent, or new business, to your business?
How about sponsoring an edition of phpweekly.com?
A stand out advert at the top of our page will catch the eyes of our subscribers.

With our subscriber list nudging 20,000, you could just find exactly who, or what, you are looking for right here.

For more information drop me a line at katie@phpweekly.com.
 
Cheers
Ade and Katie

Please help us by clicking to our sponsor:

encrypt php scripts 
Protect your PHP Code
Why not try SourceGuardian 11. Click here to download a 14 Day Trial copy. Protect your code using Windows, Linux or Mac and run everywhere with our free Loaders.

Articles

Symfony vs Laravel vs Nette - Which PHP Framework Should You Choose
I have been asked this question over a hundred times, in person and as post requests. When should you use Symfony? How is Laravel better than Symfony? What are Nette's killer features compared to Symfony and Laravel? Today, we look at the answer.

20 Useful Laravel Packages Available on CodeCanyon
If you’re not familiar with the Laravel framework, then it’s time to discover what you’ve been missing by checking out these 20 popular Laravel tools and packages to be found at CodeCanyon.

Preface to idbg
We already have several options for debugging code within the PHP ecosystem. XDebug is extremely mature software, and phpdbg has been slowly gaining traction also, if for no other reason than it's very fast to collect code coverage compared to XDebug.

The Complete Guide to WordPress Performance Optimisation
WordPress can thank its simplicity and a low barrier to entry for this pervasiveness. It’s easy to set up, and requires next to no technical knowledge. Hosting for WordPress can be found for as little as a couple of dollars per month, and the basic setup takes just a half hour of clicking. Free themes for WordPress are galore, some with included WYSIWYG page builders.

Email News After GDPR
GDPR took effect last month, and many organisations sent policy updates to your inbox. We took action on our email lists to acquire explicit consent from all subscribers. You can read about other action we took to prepare for GDPR, but this post is all about what we communicate about through the Drupal email list.

Tutorials and Talks

Continuous Delivery with Jenkins and GitHub
If you can set up a project server once, you can set up Jenkins to deploy that project again and again as you develop, maintain, and expand it. In this post we will set up a multi-stage deploy server and the Jenkins jobs we need for continuous delivery. By the end you will know how to set up a server and Jenkins jobs to automatically deploy successfully built branches into each environment.

Understanding Design Patterns - Composite
Allows you to compose objects into tree structures to represent part-whole hierarchies. Composite lets clients treat individual objects and compositions of objects uniformly.

Building a PHP Framework: Part 4 – The Foundation
Part 3 was all about action – without actually taking any or writing any code. This installment will actually see the groundwork for the Analyze framework taking shape.

Serverless Laravel
Last week I introduced Bref as a solution to running PHP serverless. Today let’s try to deploy a Laravel application on AWS lambda using Bref. The code shown in this article is available on GitHub.

How to Count the Number of Lines of Code in a PHP Project
I'm giving a talk soon about Laravel and "the enterprise", and the concept of LOC (lines of code) keeps coming up. It turns out that's actually a much harder number to discover than you might think, so I figured I would write up a few options here.

PHP 7.3: A Look at JSON Error Handling
One of the new features coming to PHP 7.3 is better error handling for json_encode() and json_decode(). The RFC was unanimously accepted by a 23 to 0 vote. Let’s take a look at how we handle JSON errors in <= PHP 7.2, and the new improvements coming in PHP 7.3.

Send Emails in PHP Using Swift Mailer
In this article, we're going to explore the Swift Mailer library, which allows you to send emails from PHP applications. Starting with installation and configuration, we'll go through a real-world example that demonstrates various aspects of sending emails using the Swift Mailer library.

Road to Dependency Injection
I've worked with several code bases that were littered with calls to Zend_Registry::get(), sfContext::getInstance(), etc. to fetch a dependency when needed. I'm a little afraid to mention façades here, but they also belong in this list. The point of this article is not to bash a certain framework (they are all lovely), but to show how to get rid of these "centralised dependency managers" when you need to.

Using Composer with Serverless & OpenWhisk
Every PHP project I write has dependencies on components from Packagist and my Serverless OpenWhisk PHP projects are no different. It turns out that adding Composer dependencies is trivial.

How to Migrate From PHP_CodeSniffer to EasyCodingStandard in 7 Steps
Last year, I helped Shopsys Coding Standards and LMC PHP Coding Standard to migrate from PHP_CodeSniffer to EasyCodingStandard. There are a few simple A → B changes, but you have to know about them or you will get stuck. Do you also use PHP_CodeSniffer and want to give EasyCodingStandard a try? Today we look at how to migrate, step by step.
 
Our PHP cheat sheet aims to help anyone trying to get proficient in or improve their knowledge of PHP. The programming language is among the most popular in web development. It lies at the heart of WordPress, the world’s most popular content management system, and also forms the base of other platforms like Joomla and Drupal.

News and Announcements

PHP 7.3.0 Alpha 1 Released
PHP team is glad to announce the release of the first PHP 7.3.0 version, PHP 7.3.0 Alpha 1. This starts the PHP 7.3 release cycle, the rough outline of which is specified in the PHP Wiki. For source downloads of PHP 7.3.0 Alpha 1 please visit the download page. Please carefully test this version and report any issues found in the bug reporting system.

CoderCruise - August 30-September 3rd 2018, Ft. Lauderdale, FL
Tired of the usual web technology conference scene? Want a more inclusive experience that lets you get to know your fellow attendees and make connections? Well, CoderCruise was designed to be just this. It's a polyglot developer conference on a cruise ship! This year we will be taking a 5-day, 4-night cruise out of Ft. Lauderdale, FL that includes stops at Half Moon Cay and Nassau. Tickets are on sale now.

WavePHP Conference - September 19th-21st 2018, San Diego
WavePHP Conference is bringing the wonderful PHP community to the Southwest United States. Designed to be a conference for both professionals and hobbyists alike. Held in beautiful southern California's San Diego County the area has ideal weather and tons of activities. Early Bird Tickets are on sale now.

Pan-Asian PHP Conference - September 26-29th 2018, Singapore
The third pan-Asian PHP conference will take place in September 2018 in Singapore - the Garden City of the East! This is a single-track, two-day conference, followed by a day of tutorials on 29th September 2018. Come and meet the fastest-growing PHP communities in Asia. More than 300 attendees are expected, with Rasmus Lerdorf and Sebastian Bergmann already confirmed as speakers. Super Early Bird tickets are on sale now.

Nomad PHP US - July 19th 2018 20:00 CDT
Better and Faster: TDD-ing a Ride-Hailing Application w/ PHPUnit, Symfony and Doctrine
Presented by Chris Holland. Imagine building an application without having to mess with a Web Browser, a REST client or a MySQL client. What if you could build full-blown functionality with realistic data operations within the comfort of a Unit Test Harness? What if this meant shipping code earlier and more frequently than you ever have before? Building upon concepts outlined in this talk: http://bit.ly/tdd-talk-2 , and leveraging an evolving “Kata” for building a “Ride-Hailing Application”, this exercise will walk thru a rapid-development example from a “clean-slate” Symfony3 project, with just enough bootstrapping to enable Test-Driven Development with PHPUnit & Doctrine.

Nomad PHP EU - July 19th 2018 20:00 CEST
The PHP Developer Stack for Building Chatbots. Presented by Christoph Rumpel. Facebook Messenger, WhatsApp, WeChat, Skype, and Telegram have more than three billion active users combined! This led messenger platforms to open their doors for application development on their chats and started the rise of these applications. We all know them today as chatbots. Chatbots are much more than a hype. They change the way we communicate with companies and are bringing customer support and personalisation to a new level. But what does the technology behind look like? In this talk, I will show you all the tools it takes to build a chatbot in PHP. You will see what it’s like developing and testing chatbots for multiple platforms and how NLP (Natural Language Processing) services can help you to understand the user.

Podcasts

Voices of the ElePHPant - Interview with Mike Stowe
Cal Evans Interviews Mike Stowe of Ring Central while at Longhorn PHP 2018.
 
Laravel Podcast Episode 13 - Interview: Adam Wathan, Co-Creator of Tailwind CSS and Laravel Educator
An interview with Adam Wathan, co-creator of the Tailwind CSS library and author and video producer.
 
Full Stack Radio Podcast Episode 90: David Hemphill - Using JSX with Vue.js
In this episode, Adam talks to David Hemphill about using JSX instead of templates in Vue.js, and why you might want to give it a try.

MageTalk Magento Podcast #172 – “The Magento Community Productivity Quotient”
"They thought I did it in Node.js and I did it in jQuery and they emoji'd me to death" Kalen reacts to the livestream and Phillip talks about the SD Accelerator, Magento 2.0 End of Life, long-term support, the coming onslaught of B2B into our Magento builds.

PHP Roundtable Podcast Episode 73: PHP Static Analysis
Static analysis is a fancy word to describe a tool that looks at our code and gives us helpful hints on how to improve it. We'll be discussing what static analysers do, which tools the PHP community has access to, and how we can incorporate the tools into our daily development flow.

PHP Ugly Podcast #108: This is American - Privacy Edition
This month the team discusses privacy and the lack of it, in our current government.

PHP Web Development Podcast Ep #2 - Dan’s Journey to Becoming a Lead Developer
This week I have the pleasure of speaking to Dan Blows. Currently a tech lead, he has done some really cool stuff: he's in the top 3% on Stack Overflow and among the top 10% of PHP developers in Europe, speaks at meetups, has spoken at a European conference in front of hundreds of people, and trains and mentors junior developers.

Reading and Viewing

Announcing Laravel Events
Laravel Events is a brand new community site that I created with the goal of helping keep the community informed of conferences, meetups, and other events. My goal is to pull over the events for the Laravel News homepage and also integrate with the weekly newsletter. If you run a meetup or local event go add your future meetings.

php[architect] Magazine June 2018 - Command and Control
Staying on top of what your code is doing is imperative if you want to keep your sanity. At the start, this means defining how and what your software does. Later, you have to track its evolution as you add features and fix bugs. In this issue, our contributors share techniques and tools to help you do so.

Startup Tips on How to Stay Motivated with Your Business Idea from 9 Key Influencers
Motivation is the Achilles heel for many entrepreneurs and startup owners. You’d assume that the more innovative an entrepreneur is, the more motivated he is to pursue his business or startup idea to fruition, right?

Coding Blocks for WordPress Gutenberg
WordPress is working on a dramatic redesign of their editor. It's called Gutenberg, and it aims to provide a true WYSIWYG experience by breaking up pieces of posts and pages into individual blocks of content. Of course, this brings some big changes for plugin and theme developers!

The Month in WordPress: May 2018
This month saw two significant milestones in the WordPress community — the 15th anniversary of the project, and GDPR-related privacy tools coming to WordPress Core. Read on to find out more about this and everything else that happened in the WordPress community in May.

Jobs





Do you have a position that you would like to fill? PHP Weekly is ideal for targeting developers and the cost is only $50/week for an advert.  Please let me know if you are interested by emailing me at katie@phpweekly.com

Interesting Projects, Tools and Libraries

FOSS Project Spotlight: The Codelobster IDE--a Free PHP, HTML, CSS and JavaScript Editor
The Codelobster free web language editor has been available for quite some time and has attracted many fans. It allows you to edit PHP, HTML, CSS and JavaScript files, and it highlights the syntax and provides hints for tags, functions and their parameters. This editor deals with files that contain mixed content easily as well.
 
nolovia
Nolovia is an ad/malware blocking configuration file generator for bind, NSD, and other DNS resolvers.

znframework
The basic principle of ZN Framework is to let users write simple and readable codes. Because of this principle, our libraries are built by using both dynamic and static access methods with a Powerful Autoloading Architecture.

datatables-bundle
This bundle provides convenient integration of the popular DataTables jQuery library for realtime Ajax tables in your Symfony 3.3+ or 4.0+ application.

pestle
A collection of command line scripts for Magento 2 code generation, and a PHP module system for organising command line scripts.

luthier-ci
An awesome set of core improvements for CodeIgniter 3 that makes developing APIs (and websites in general) easier!

boinc
Open-source software for volunteer computing and grid computing.

php-swagger-test
A set of tools for testing your REST calls based on the Swagger documentation, using PHPUnit.

inphinit
PHP web application using Inphinit framework.

phpcompatibility
This is a set of sniffs for PHP CodeSniffer that checks for PHP version compatibility. It will allow you to analyse your code for compatibility with higher and lower versions of PHP.

intisp
IntISP is a hosting control panel designed to be light and fast. It uses only PHP and HTML, with a few shell scripts.

php-time-ago
Simple module, that displays the date in a "time ago" format.

generatedhydrator
A library about high performance transition of data from arrays to objects and from objects to arrays.

redismock
PHP 7.1 library providing a Redis PHP mock for your tests.

tracy
The addictive tool to ease debugging PHP code for cool developers. Friendly design, logging, profiler, advanced features like debugging AJAX calls or CLI support. You will love it.

 

So, how did you like this issue?

We are still trying to grow our list. If you find PHP Weekly useful please tweet about us! Thanks.
Also, if you have a site or blog related to PHP then please link through to our site.

Copyright © 2018 PHP Weekly, All rights reserved.

Funk ‘n Cider Music Finale

Published 14 Jun 2018 by Dave Robertson in Dave Robertson.


Can't upload files in Mediawiki

Published 13 Jun 2018 by Behiry in Newest questions tagged mediawiki - Stack Overflow.

After I updated to MediaWiki 1.29.0, I can't upload any files. When I upload a file, I get this error message:

Could not open lock file for "mwstore://local-backend/local-public/b/b0/1.jpg".

I've chmodded the images folder and its sub-directories to 755, and verified that the folder images/b/b0 exists and is writable.

I'm running CentOS 7.5 and PHP 5.6.
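"Could not open lock file" usually points at the lockdir directory MediaWiki keeps under images/ for file-backend locking, and ownership matters as much as mode bits: 755 is useless if the web-server user (often apache on CentOS) doesn't own the tree. A self-contained sketch of the requirement, using a scratch directory rather than your real /var/www path:

```shell
# Recreate the layout MediaWiki needs under a scratch root; on a real install,
# substitute your images/ directory for "$ROOT" and run chown -R apache:apache on it.
ROOT=$(mktemp -d)
mkdir -p "$ROOT/lockdir" "$ROOT/b/b0"
find "$ROOT" -type d -exec chmod 755 {} \;
# The upload code must be able to create lock files here:
touch "$ROOT/lockdir/probe.lock" && echo "lock dir writable"
```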


W3C Strategic Highlights for Spring 2018 and Advisory Committee Meeting

Published 13 Jun 2018 by Jeff Jaffe in W3C Blog.

W3C held its annual meeting in mid-May in Berlin, Germany. It was a very engaging meeting that many found excellent, with a good range of sessions, a clear structure, and good audience participation. The W3C Advisory Board had worked very closely with me, feeling the pulse of the membership and understanding which topics needed conversation. The planning started as early as December, and a great deal of work went into creating the program.

I felt that this was a meeting where the Advisory Committee truly “advised”. There was much time scheduled for open discussion. This would have been boring if no one had come to the microphone. But people came! With great input. Differences of opinion. Expressed with passion and respect. Striving for consensus on critical governance issues facing the consortium.

The agenda focused on governance, complemented with demonstrations of new technology and general web issues of the day. Here are a few highlights I would like to share publicly:

Our partnership with WHATWG

W3C management shared a proposal for a joint workmode with WHATWG on HTML, that was developed with the WHATWG Steering Group, and seeks to organize a partnership that has had ebbs and flows over a period of more than a decade. A set of terms were developed, but W3C management had not had an opportunity to socialize it, or get feedback, reaction, and support. A large number of people came to the microphone with diverse viewpoints. There was general support for the overall direction, but quite a few important suggestions about how to make the proposal even better were made and need to be further developed before broader socialization.

Diversity

A panel on diversity focused on the progress we have made and how much more is required. A W3C Member came to the microphone to propose, and agree to start funding, “diversity tickets” for TPAC attendance. We discussed actions that W3C should take to promote diversity, including publishing our diversity statistics and encouraging AC reps to nominate a more diverse set of people to run for Advisory Board and Technical Architecture Group elections. This was further discussed at the AB's May face-to-face meeting right after the AC meeting, and the team will help set this up.

I will soon write again about this.

Updates (CEO, TAG, demos, process)

As with any AC meeting part of the objective is to share information. I highlighted breakthrough success – in accessibility, payments, authentication, and fonts – as well as pointing to innovative directions in general. Dan Appelquist focused on the TAG’s continued efforts to address issues coming from many directions; particularly our Working Groups. Natasha Rooney described “W3C Process for Busy People” that intends to make our process more easily understandable.

Propaganda, misinformation and fake news

We had a very interactive panel about propaganda, misinformation and fake news, and the role of standards. Partly this was to inform and engage the AC with a topic currently on our periphery; partly this was to discuss work being incubated in our Community Groups which could become standards in the future; and partly it was to engage leading thinkers in our host country (Germany) in a topic of public interest.

Technical controversy

There has been some controversy in our community since the TAG published its finding on distributed and syndicated content, so we were treated to a dialog between the editor of the document and a leader in the AMP development organization to expose some different viewpoints on this question.

Invited Expert identity program

In conversations at the AC meeting as well as at the AB meeting, there was a feeling that we needed to do more for Invited Experts. At the Advisory Board meeting, they recommended that we raise TPAC registration fees for all registrants by 5% (with an opt-out possibility) to create a fund which would allow IEs with financial challenges to be exempted from registration fees. Additionally, the AB recommended that we create an Invited Expert Identity program, which would consist of a mailing list where to share experiences with each other – to become a community; and inviting this community to send an observer to AC meetings.

I will soon write again about this.

During the meeting we released the Spring 2018 W3C Strategic Highlights, which gives an overview of recent highlights and of our work on consolidation, optimization, enhancement of the existing landscape, innovation, incubation, and research, as well as the road map for the Web.


Semantic Media Wiki action=purge alternatives and automation

Published 13 Jun 2018 by Barnes in Newest questions tagged mediawiki - Stack Overflow.

I'm having an issue where pages with #ask queries aren't updating after I update content on other pages. The only way to get them to update seems to be using action=purge. Is there a maintenance script that will perform this across all pages? Which variables can I use to reduce the amount of time a page is cached? I'm having trouble determining which caches I need to adjust.


Book review: Seven Types of Atheism

Published 13 Jun 2018 by in New Humanist Articles and Posts.

John Gray's latest book goes to great pains to underline the debt that non-religious intellectuals owe religion.

Deploying a Multi-region Docker Registry to Improve Performance

Published 12 Jun 2018 by Jeff Zellner in The DigitalOcean Blog.

Deploying a Multi-region Docker Registry to Improve Performance

Over the past several years, containers in general, and Docker specifically, have become quite prevalent across industry. Containerization offers isolated and reproducible build and runtime environments in a simple and developer-friendly form. They make the entire software development process run a bit smoother, from initial development to deploying services in production. Orchestration frameworks like Kubernetes and Mesos offer robust abstractions of service components, which simplifies deployment and management.

Like many other tech companies, DigitalOcean uses containers internally to run production services. Quite a few of our services run inside Kubernetes, and a large slice of those run on an internal platform that we've built to abstract away some of the pain points for developers new to Kubernetes. We also use containers for CI/CD in our build systems, and locally for development. In this post, I’ll describe how we redesigned our Docker registry architecture for better performance. (You can find out more about how DigitalOcean used both containers and Kubernetes in a talk by Joonas Bergius, and more about our internal platform, DOCC, in this talk by Mac Browning.)

Simple beginnings and growing pains

Initially, to host our private Docker images, we set up a single server running the official Docker registry, backed by object storage. This is a common, simple pattern for private registries, and it worked well early on. By relying on a consistent object store for backing storage, the registry itself doesn’t have to worry about consistency. However, with a single registry instance, there are still performance and availability bottlenecks, as well as a dependency on being able to reach the region running the registry.

As our use of containers grew, we started to experience general performance issues such as slow or failing image pushes. A simple solution for this would be to increase the number of registry instances running, but we’d still have a dependency on the single region being available and reachable from every server.

Additionally, the default behavior of the official Docker registry is to serve the actual image data via a redirect to the backing store. This means a request from a client arrives at the registry server, which returns a HTTP redirect to object storage (or whatever remote backend you have configured the registry to use). One unique issue that we encountered was a large deployment of large Docker images (~10GB) spiking bandwidth to our storage backend. Hundreds of clients requested a new, large image at the same time, saturating our connection to storage from our data center. Running multiple instances of the registry wouldn’t solve this issue—all the data would still come from the backing store.

Design goals

We decided it was time to overhaul our Docker registry architecture, with a few primary goals in mind:

Architecture choices

We operate relatively large Kubernetes clusters in every DigitalOcean region, so using the fundamental building blocks that Kubernetes and our customizations offer was a logical choice. Kubernetes provided us with great primitives like scaling deployments and simple rolling deploys. Additionally, we have lots of internal tooling for running, monitoring, and managing services running inside Kubernetes.

For caching, we decided to take advantage of the Docker registry’s ability to disable redirects. Disabling redirection causes the registry server to retrieve image data, and then send it directly to the client, instead of redirecting the request to the backend store. This adds a bit of latency to the initial response, but enables us to put a caching proxy like Squid in front of the registry and serve cached data without transiting to the backing store on subsequent requests.

At this point, we had a good idea of how to run multiple caching registries in every region, but we still needed a way to direct clients to request Docker images from the registry in their region, instead of a single global one. To accomplish this, we created a new DNS zone that was not shared between regions, so that clients in each region could resolve the DNS address of our registry to the local region's registry deployment, instead of to a single registry located in a different region.
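Split-horizon DNS like this amounts to the same record carrying a different answer in each region's zone. A sketch with hypothetical names and addresses (not DigitalOcean's actual zone data):

```
; Hypothetical sketch: each region's resolvers serve their own copy of the zone,
; so the one registry hostname resolves to the local deployment.
; Served to clients in region A:
dockerregistry.internal.   300  IN  A  10.10.0.5
; Served to clients in region B:
dockerregistry.internal.   300  IN  A  10.20.0.5
```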

Implementation details

The registry configuration we ended up using was rather standard, using a storage backend configured with an access key and secret key. The one important bit, as previously mentioned, was disabling redirects:

storage:  
  redirect:
    disable: true

For caching image data locally with the registry, we chose to use Squid. Each instance of the registry would be deployed with its own Squid instance and its own cache storage. This approach was simple to set up and configure, but it does have drawbacks: notably, each instance of the registry has its own independent cache. This means that in a deployment of multiple instances, identical requests directed to different backing instances can result in several cache misses, one for each instance of the registry and cache. There's room for future improvement here: setting up a larger, shared cache that all registry instances in a region sit behind. Still, any local caching at all was a big improvement over our original setup, so it was an acceptable tradeoff for our initial work.
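As a toy illustration (not DigitalOcean's actual code), the worst case described above, where each independent per-instance cache must fetch a hot image from the backing store once before it is warm, can be sketched in Python:

```python
def backend_fetches(num_requests: int, num_caches: int) -> int:
    """Count backing-store fetches for one hot image when requests are
    spread round-robin across independent per-instance caches."""
    warmed = set()  # caches that already hold the image
    fetches = 0
    for i in range(num_requests):
        cache = i % num_caches  # round-robin request routing
        if cache not in warmed:
            fetches += 1        # first request to this cache misses
            warmed.add(cache)
    return fetches
```

With three independent caches, a hot image is fetched from storage up to three times; a single shared regional cache would fetch it only once, which is the future improvement mentioned above.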

To configure Squid, we wrote a simple configuration to listen for HTTPS connections and to send all cache misses to the local registry:

https_port 443 accel defaultsite=dockerregistry no-vhost cert=cert.pem key=key.pem  
...
cache_peer 127.0.0.1 parent 5000 0 no-query originserver no-digest forceddomain=dockerregistry name=upstream login=PASSTHRU ssl  
acl site dstdomain dockerregistry  
http_access allow site  
cache_peer_access upstream allow site  
cache allow site  

Once we had written the registry and Squid configuration, we combined the two pieces of software to run together in a Kubernetes deployment. Each pod would run an instance of the registry and an instance of Squid, with its own temporary disk storage. Deploying this across our regional Kubernetes clusters was straightforward.

apiVersion: extensions/v1beta1  
kind: Deployment  
metadata:  
  name: registry
spec:  
  replicas: 3
  template:
    spec:
      volumes:
        - name: registry-config
          configMap:
            name: registry-config
        - name: squid-config
          configMap:
            name: squid-config
        - name: cache
          emptyDir: {}
      containers:
        - name: registry
          image: registry:2.6.2
          volumeMounts:
            - name: registry-config
              mountPath: /etc/docker/registry/config.yml
              subPath: config.yml
        - name: squid
          image: squid:3.5.12
          ports:
            - containerPort: 443
          volumeMounts:
            - name: squid-config
              mountPath: /etc/squid/squid.conf
              subPath: squid.conf
            - name: cache
              mountPath: /cache

The last bit of remaining work was enabling ingress to our new registry, which we did using our existing HAProxy ingress controllers. We terminate TLS with Squid, so HAProxy is only responsible for forwarding TCP traffic to our deployment.

apiVersion: extensions/v1beta1  
kind: Ingress  
metadata:  
  name: docker
spec:  
  rules:
    - host: dockerregistry
      http:
        paths:
          - path: /
            backend:
              serviceName: docker
              servicePort: 443
  tls:
    - hosts:
        - dockerregistry
      secretName: not_needed

Conclusion

This registry architecture has been working well, providing much quicker pulls and pushes across all of our data centers. With this setup, we now have Docker registries running in all of our regions, and no region depends on reaching another region to serve data. Each registry instance is now backed by a Squid caching proxy, allowing us to keep many requests for the same data entirely in cache, and entirely local to the region. This has enabled larger deploys and much higher pull performance.

Future improvements will be made around metrics instrumentation and monitoring. While we currently compute metrics by scraping the registry logs, we're looking forward to the Docker registry including Prometheus metrics natively. Additionally, creating a shared regional cache for our registry deployments should provide a nice performance boost and reduce the number of cache misses we see in operation.

Jeff Zellner is a Senior Software Engineer on the Delivery team, where he works on providing infrastructure and automation around Kubernetes to the DigitalOcean engineering organization at large. He's a long-time remote worker, startup-o-phile, and incredibly good skier.


Episode 10: Ben Fletcher

Published 12 Jun 2018 by Yaron Koren in Between the Brackets: a MediaWiki Podcast.

Ben Fletcher is a systems architect at the Information Systems and Services (ISS) cluster for the UK Ministry of Defence (MoD). He helped to select MediaWiki for use at the MoD in 2016, and currently does MediaWiki-related work full-time.

Links for some of the topics discussed:


How to center a text in a blank space under an image in a fixed div?

Published 11 Jun 2018 by Thibd in Newest questions tagged mediawiki - Stack Overflow.

I'm trying to make a homepage with a blog-style layout on MediaWiki, with DynamicPageList 3 and some HTML/CSS.

Schema

DynamicPageList generates a grid: an image and a blog title with a link. There are no "a" or "img" tags; it is wiki code. But I can encapsulate it in divs and spans with CSS.

Each blog image+title is in a fixed height and width div of 322x322px with whitegrey borders.

At the top of the div we have the image with 320px width. Height is unknown.

Under the image there is blank space.

The blank space height is unknown.

In the blank space there is the blog post title.

The blog post title has to be vertically centered in the blank space. It can spread over 1, 2 or 3 lines.

I have an idea involving a table, but I'd rather ask the experts for this one. What would be your proposal to achieve this with some simple HTML/CSS?

Thanks !

{{#dpl:
  |namespace= 
  |format   = ,¶¶
  <div style="width:322px;height:322px;border:solid 1px lightgrey;margin-bottom:30px">
        [[Fichier:%PAGE%.jpg|320px|link=%PAGE%]]
        <br/>
        <div style="font-size:150%;padding:12px;border-top:solid 1px lightgrey">
              [[%PAGE%]]
        </div>
  </div>,
  |columns=3
  |rowcolformat=width=100%
  }}

Installing MediaWiki and Configuring It on a Hosted Server

Published 11 Jun 2018 by N Vora in Newest questions tagged mediawiki - Stack Overflow.

I am trying to create a wiki on a hosted server using MediaWiki. I have downloaded and extracted the MediaWiki files into my server space. I also have XAMPP, which comes with PHP 7.2, and I have activated Apache and MySQL using the XAMPP Control Panel. I have followed the instructions listed on https://www.mediawiki.org/wiki/Manual:Installing_MediaWiki and am trying to run the installation script as instructed by https://www.mediawiki.org/wiki/Manual:Config_script . However, the installation script is not running, and when I try to open the link that my wiki is supposed to be hosted on, I am presented with this. What should I do to be able to run the installation script? Thank you very much!


Do Not Track and the GDPR

Published 11 Jun 2018 by Mike O'Neill in W3C Blog.

The Tracking Protection Working Group (TPWG) has been engaged with issues of online data protection, privacy and tracking since 2011. Its Tracking Protection Expression draft recommendation (TPE), substantially completed in 2013, first became a Candidate Recommendation (CR) in August 2015. The main feature of the TPE, the DNT request header, is now implemented by all the major browsers via a general preference setting, with the JavaScript API for registering a site-specific preference implemented by browser extensions, as well as Microsoft's Internet Explorer and Edge browsers.

The DNT header indicates settings that a user has made within their browser, either directly or mediated by script on a page, to indicate their preference of agreeing or declining to be tracked. Once a “general preference” is configured, browsers add the DNT header to all HTTP requests, including requests to be sent to embedded sub-resources. The header value can either start with “1”, meaning “Do Not Track”, or “0” signifying “this user has agreed to tracking for the purposes explained”. There is a defined JavaScript API letting a browsing context change the DNT setting for its own domain origin, or for the domain origin of its embedded sub-resources – so called “site-specific” consent.
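As a minimal sketch of that header format (the function and its field names are my own, not part of the TPE), a server-side parse of an incoming DNT value could look like:

```python
def parse_dnt(value: str):
    """Parse a DNT header value: it starts with "1" (do not track)
    or "0" (the user has agreed to tracking), optionally followed by
    an extension string, e.g. a purpose descriptor on DNT:0."""
    if not value or value[0] not in ("0", "1"):
        return None  # not a recognisable DNT value
    return {"consent": value[0] == "0", "extension": value[1:]}
```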

GDPR & ePrivacy

The General Data Protection Regulation (EU) 2016/679, which has just come into force, is important for web privacy because it clarifies what makes for valid user consent in more detail than the Data Protection Directive that preceded it. The existing ePrivacy Directive (introduced in 2002, amended 2009) requires prior user consent for access to storage in browsers, other than for a restricted set of exempted purposes, and now for consent to be valid it must meet its description in the GDPR. Consent must not only be freely given, specific, informed and unambiguous, it must be indicated by the user’s affirmative act – it is no longer enough to display “implied consent” notices, pre-selected checkboxes, or cookie walls, and it must be as easy for users to withdraw consent as to give it.

The GDPR also introduces much larger fines, making data and privacy protection a board level topic.

There is also a new ePrivacy Regulation (ePR) in the works, aimed at replacing the ePrivacy Directive. Although the European Parliament completed its deliberations last year, and voted through its own draft text, the European Council has dragged its feet somewhat. Even so, the important trilogue discussions between the European Parliament, Council and Commission, aimed at finalising the text, are expected to start soon.

DNT

DNT is a highly efficient way to convey user consent to web servers because the header is always present in every request. A JavaScript global property also allows a browsing context, say for an iframe tag or a first-party page, to immediately determine the current setting. Although HTTP cookies can of course also encode a consent signal, there is no way to selectively include them in sub-resource requests, as cookies once stored will always be sent to their respective domain origins (i.e. to access third-party resources on any first-party site), and moreover there is no simple or efficient API a browsing context can use to set cookies for its embedded sub-resource domains.

The TPE also defines a JSON resource, called the Tracking Status Resource (TSR), to be made available by domains that implement DNT, located at a well-known path (/.well-known/dnt/). This resource enables domains to declare their identity, tracking policy, and other important items, so that browsers can show users which servers are being enlisted to supply content for a page, supporting the now legally required transparency. European data protection and privacy law requires that users be able to determine who they may be tracked by, and for what purpose, and give their informed and specific consent if they freely choose to.

The Tracking Protection Working Group was chartered in 2017 to "demonstrate the viability of the TPE to address the requirements for managing cookie and tracking consent that satisfies the requirements of EU privacy legislation". This resulted in a new CR for the TPE in October 2017, which included improvements to the JavaScript API and other elements.

Later, further changes were put forward in the draft to meet the requirements of the European Parliament's agreed text for the EU's ePrivacy Regulation, and to allow for the communication of agreed purposes requested by the AdTech or "industry side" group members. The API was extended so that a site-specific signal was available to indicate the required right-to-object for permitted "web audience measurement" (A8.1d in the European Parliament's ePR text), i.e. to send a DNT:1 header to certain domains even if the general preference had not been set, and an extension to the header was defined so that a purpose descriptor could be sent when consent had been given, i.e. an extension to the DNT:0 header. A new "purposes" property for the TSR was defined whereby a server can indicate, via a dynamically created web page, the purposes the user has agreed to by decoding the new extension field in the incoming DNT header.

Implementation

Now that the GDPR is in force, and the ePrivacy regulation final text hopefully soon to be agreed, the fact that a CR exists for efficient signalling of user consent may encourage browser providers to implement or update their DNT implementations.

If they do, DNT would offer a much better signalling method for user consent than techniques based on HTTP cookies. Third-party cookies as presently constituted cannot convey site-specific consent1, and it is unlikely that users, once they have been made aware of their right to give their prior consent, will agree if their only option is to be tracked across the entire web. Although the IAB EU’s recently introduced Consent and Transparency Framework (CTF) allows for consent to be recorded in first-party cookies, and so site-specifically, there is no mechanism to persist it within a sub-resource context without using a third-party cookie (or other domain specific storage), which is then incapable of recording the site-specific context. Without persistence the efficiency of indicating consent to third-parties becomes a problem.

In DNT the browser absolutely determines which domain receives the consent signal, within the parameters of the Same Origin Policy. And while DNT does not need the elaborate encoding of party identity, with its attendant fingerprinting risks, underlying the CTF's "daisybit" identifier, that identifier can still be incorporated in a consent-based protocol where the "daisybit" is only sent to the parties the user has agreed to. This could give the online advertising industry, the publishers that rely on it, and web users a win-win outcome: good for data protection, privacy and commerce.

Extensions

The architecture of the DNT protocols has been designed to be extensible, and there have been discussions in the TPWG about additions that could help publishers and advertisers improve efficiency by extending the protocols for consent-contingent targeting and privacy-oriented audience measurement. If representatives from publishing and advertising wish to engage with that, the TPE is a great base to build on. We have had a charter extension till September but if new members with a commitment to engage were to appear, we should be able to extend it further.


Mike O’Neill is an Invited Expert in the Tracking Protection WG


What happened after Russia decriminalised domestic abuse

Published 11 Jun 2018 by in New Humanist Articles and Posts.

Despite a chronic domestic violence problem, a new law has made punishing abusers even harder. Where does Russia go from here?

Running WikiMedia lua script or expanding API module output

Published 9 Jun 2018 by Andrew in Newest questions tagged mediawiki - Stack Overflow.

I have two specific questions and I hope someone can answer either of them:

  1. Is there a way to provide MediaWiki's mw library to a stand-alone lua script?
  2. Is there an API command or property that exposes the output of a dynamic module?

Background: I am trying to figure out how to access the output of a wiktionary module (in this case, pron-th). This is a module that can be dynamically inserted by editors to show transliteration (pronunciation) of Thai words. For example, whenever an editor has added this line:

 {{th-pron|ไคฺร่}}

...the server will run the Lua script found here and output a table showing the various transliterations (example). However, this output is specifically excluded from API requests (example) and I cannot find an endpoint that includes this data. And running the Lua script directly fails because it is missing several imports, such as mw.ustring, mw.text, etc., which I believe are defined in a PHP include higher up their software stack. I have significant PHP experience but none with Lua, so I am somewhat at a loss here.

Short of calling up each page directly and scraping the data, I can't think of a way to do this.


localhost/mediawiki-1.30.0/ in browser returns run(); in Firefox and file content in Chromium

Published 9 Jun 2018 by Tommy Pollák in Newest questions tagged mediawiki - Ask Ubuntu.

After updating from Ubuntu 17.10 to 18.04 my Wiki stopped working.

My system looks like:

~$ ls /var/www/html/mediawiki-1.30.0/
api.php                     img_auth.php             phpcs.xml
autoload.php                includes                 profileinfo.php
cache                       index.php                README
CODE_OF_CONDUCT.md          INSTALL                  RELEASE-NOTES-1.30
composer.json               jsduck.json              resources
composer.local.json-sample  languages                serialized
COPYING                     load.php                 skins
CREDITS                     LocalSettings.php        StartProfiler.sample
docs                        LocalSettings.php~       tests
extensions                  maintenance              thumb_handler.php
FAQ                         mediawiki-1.30.0         thumb.php
Gruntfile.js                mediawiki-1.30.0.tar.gz  UPGRADE
HISTORY                     mw-config                vendor
images                      opensearch_desc.php

and directing Firefox to http://localhost/mediawiki-1.30.0/ makes it display run(); whereas Chromium displays:

<?php
/**
 * This is the main web entry point for MediaWiki.
 *
 * If you are reading this in your web browser, your server is probably
 * not configured correctly to run PHP applications!
 *
 * See the README, INSTALL, and UPGRADE files for basic setup instructions
 * and pointers to the online documentation.
 *
 * https://www.mediawiki.org/wiki/Special:MyLanguage/MediaWiki
 *
 * ----------
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation; either version 2 of the License, or
 * (at your option) any later version.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License along
 * with this program; if not, write to the Free Software Foundation, Inc.,
 * 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
 * http://www.gnu.org/copyleft/gpl.html
 *
 * @file
 */

// Bail on old versions of PHP, or if composer has not been run yet to install
// dependencies. Using dirname( __FILE__ ) here because __DIR__ is PHP5.3+.
// @codingStandardsIgnoreStart MediaWiki.Usage.DirUsage.FunctionFound
require_once dirname( __FILE__ ) . '/includes/PHPVersionCheck.php';
// @codingStandardsIgnoreEnd
wfEntryPointCheck( 'index.php' );

require __DIR__ . '/includes/WebStart.php';

$mediaWiki = new MediaWiki();
$mediaWiki->run();

I had this happen before and I got an answer that solved the problem. However, I cannot find the answer any longer, so I must ask the question again.

I think the answer was a configuration item but which? Configuration of apache2, php7.2 or mediawiki?


Where does the MediaWiki software store the database name and password?

Published 9 Jun 2018 by Dale in Newest questions tagged mediawiki - Stack Overflow.

Where does the MediaWiki software store the database name and password?

I've checked the database layout and I couldn't find any information about it.


How to remove categories from the all categories page in MediaWiki?

Published 9 Jun 2018 by Mark in Newest questions tagged mediawiki - Stack Overflow.

I have a MediaWiki website and I was experimenting with categories. When you delete a page, it is no longer shown on the All Pages page, as desired. However, when I delete a category it remains on the All Categories page. So my question is: how can I remove categories that have 0 items?


Customizing Wikibase config in the docker-compose example

Published 9 Jun 2018 by addshore in Addshore.

Just over a month ago I setup the Wikibase registry project on Wikimedia Cloud VPS using the docker-compose example provided by Wikibase docker images. The Wikibase registry is the first Wikibase install that I control that uses the Wikibase docker images, so I’ll be using it as an example showing how the docker images can be manipulated to configure MediaWiki, Wikibase, and load custom extensions and skins.

The example docker-compose file at the time of writing this post can be found at https://github.com/wmde/wikibase-docker/blob/5919016eac16c5f0aefc448240fdf6a09bb56bec/docker-compose.yml

Since the last blog post, new wikibase image tags have been created (the 'bundle' tags) that include some extensions you might want to enable, as well as a quickstatements image for the quickstatements service used on Wikidata, written by Magnus Manske.

Reloading currently running services

Before we learn how to change configuration and alter the default containers we need to know how to reload them to pull in the changes.

To reload a single service from the docker compose example you can use the following command example from the directory of your docker-compose file, which will create a new container for the wikibase service:

docker-compose up --no-deps -d wikibase

Using volume mounts

Changing basic configuration

Everyone using the wikibase docker images will likely want to change some configuration, be it a logo, website description or a change of language.

For a few configuration settings the ENV vars provided as part of the image can be used, although these generally only cover connecting services together such as DB_SERVER, DB_USER, DB_PASS and DB_NAME. Other ENV vars exist and are documented by the image README file.

For everything else you’ll want to change LocalSettings.php directly. Docker and docker-compose allow you to override files in an image while running a container using volumes.

Example volume mount

The docker-compose example file already contains examples of volume use by mounting persistent volumes into the running containers. For the wikibase image this can be seen in the docker-compose file with the below snippet which mounts a persistent docker volume called ‘mediawiki-images-data’ to /var/www/html/images within the container.

volumes:
      - mediawiki-images-data:/var/www/html/images

If we wanted to have the images directory be mounted from our host disk rather than using a docker volume we could instead change this to the snippet below:

volumes:
      - ./images:/var/www/html/images

Note: If you already had images saved in the ‘mediawiki-images-data’ volume this would not copy them to the local images directory. The directory would start as an empty directory and you would have to copy the files from the docker volume. You will also have to make sure that the correct permissions have been set to allow the user running in the wikibase container to write to the local directory.

Volume mount LocalSettings

The wikibase docker image README file (which in its current version can be found here) provides various files and directories of interest that a user might want to override. These include 2 locations for LocalSettings that may be of interest:

The LocalSettings template gets passed through the envsubst command in the container entrypoint, in a command similar to the one below (with DOLLAR being substituted with a real $):

export DOLLAR='$'
envsubst < /LocalSettings.php.template > /var/www/html/LocalSettings.php
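As a rough Python equivalent of that envsubst step (the template line and the MW_SITE_NAME variable here are hypothetical, for illustration only):

```python
from string import Template

def render_local_settings(template_text: str, env: dict) -> str:
    # Rough Python equivalent of the entrypoint's `envsubst` step:
    # ${VAR} placeholders are replaced with values from `env`. The
    # DOLLAR=$ trick lets literal PHP `$` signs survive substitution.
    return Template(template_text).safe_substitute(env)

# Hypothetical template line, for illustration only:
example = render_local_settings(
    '${DOLLAR}wgSitename = "${MW_SITE_NAME}";',
    {"DOLLAR": "$", "MW_SITE_NAME": "Wikibase Registry"},
)
```

The ${DOLLAR} placeholder renders to a literal $, so the PHP variable $wgSitename comes through intact while ${MW_SITE_NAME} is filled in from the environment.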

Using the same volume mounting technique described in the section above we can mount local files over these image provided files:

volumes:
      - ./images:/var/www/html/images
      - ./LocalSettings.php.template:/LocalSettings.php.template

The default LocalSettings.php.template can then be provided on your host and modified, for example a logo is added in the configuration below, which can be seen on this github gist:

Using this method, all you need to know is the correct MediaWiki or Wikibase configuration option that you wish to change; then follow the regular documentation.

Shoehorning in extensions & skins

The volume mounting method can also be used, rather ungracefully, to shoehorn extensions into the container.

Extensions can be cloned onto disk:

user@wbregistry-01:/srv/wbrdc# git clone https://github.com/wikimedia/mediawiki-extensions-Nuke.git extensions/Nuke
user@wbregistry-01:/srv/wbrdc# git clone https://github.com/wikimedia/mediawiki-extensions-ConfirmEdit.git extensions/ConfirmEdit

Directories can be mounted to locations in the containers in the docker-compose file:

volumes:
      - mediawiki-images-data:/var/www/html/images
      - ./LocalSettings.php:/var/www/html/LocalSettings.php:ro
      - ./Nuke:/var/www/html/extensions/Nuke
      - ./ConfirmEdit:/var/www/html/extensions/ConfirmEdit

And configuration added to our LocalSettings (here using the LocalSettings file directly rather than the template):

wfLoadExtension( 'Nuke' );

wfLoadExtensions([ 'ConfirmEdit', 'ConfirmEdit/QuestyCaptcha' ]);
$wgCaptchaQuestions = [
        'Question 1' => 'Response 1',
        'Question 2' => 'Response 2',
];
$wgCaptchaTriggers['create'] = true;

With a quick service reload both extensions should be visible on the MediaWiki Special:Version page.

Custom Dockerfile & image

For a more permanent solution you probably want to create your own Dockerfile and docker images.

Documentation for this process can be found here.

The post Customizing Wikibase config in the docker-compose example appeared first on Addshore.


Is MediaWiki still the main software used to run Wikipedia?

Published 8 Jun 2018 by Jean-Pierre Coffe in Newest questions tagged mediawiki - Stack Overflow.

I was wondering if Wikipedia still uses the last MediaWiki version or does it run on a heavily modified version ?

Especially regarding WikiText and the MediaWiki syntax, I wonder if they are different at all (asides from the templates).


An imperfect migration story

Published 8 Jun 2018 by Jenny Mitcham in Digital Archiving at the University of York.

Over the past six years as a digital archivist at the Borthwick Institute I have carried out a very very small amount of file migration. The focus here has been on getting things 'safe', backed up and documented (along with running a few tools to find out what exactly we have and ensure that what we have doesn't change).

I've been deliberately avoiding file migration because:

  1. there is little time to do this sort of stuff 
  2. we don't have a digital archiving system in place
  3. we don't have a means to record the PREMIS metadata about the migrations (and who wants to create PREMIS by hand?)


The catalyst for a file migration

Recently I had to update my work PC to Windows 10.

Whereas colleagues might be able to just set this upgrade off and get it done while they had lunch, I left myself a big chunk of time to try and manage the process. As a digital archivist I have downloaded and installed lots of tools to help me do my job - some I rely on quite heavily to help me ingest digital content, monitor files over time and understand the born digital archives that I work with.

So, I wanted to spend some time capturing all the information about the tools I use and how I have them set up before I can upgrade, and then more time post-upgrade to get them all installed and configured again.

...so with a bit of thought and preparation, everything should be fine...shouldn't it?

Well it turns out everything wasn't fine.


Backwards compatibility is not always guaranteed

One of the tools I rely on and have blogged about previously is Quick View Plus. I have been using Quick View Plus version 12 for the last 6 years and it is a great tool for viewing a range of files that I might not have the software to read otherwise.

In particular it was invaluable in allowing me to access and identify a set of WordStar 4.0 files from the Marks and Gran archive. These files were not accessible through any of the other software that was available to me (apart from in a version of WordStar I have installed on an old Windows 98 PC that I keep under my desk for special occasions).

But when I tried to install Quick View Plus 12 on my PC after upgrading to Windows 10 I discovered it was not compatible with Windows 10.

This was an opportunity to try out a newer version of the Quick View Plus software, so I duly downloaded an evaluation copy of Quick View Plus 2017. My first impressions were good. It seemed the tool had come along a bit in the last few years and there was some nice new functionality around the display of metadata (a potential big selling point for digital archivists).

However, when I tried to open some of the 120 or so WordStar files we have in our digital archive I discovered they were no longer supported.

They were no longer identified as WordStar 4.0.

They were no longer displaying correctly in the viewer.

They looked just like they do in a basic text processing application

...which isn't ideal because as described in the PRONOM record for WordStar 4.0 files:

"On the surface it's a plain text file, however the format 'shifts' the last byte of each word. Effectively it is 'flipping' the first bit of the ASCII character from 0 to 1. so a lower case 'r' (hex value 0x72) becomes 'ò' (hex value 0xF2); lower case 'd' (hex 0x64) becomes 'ä' (hex 0xE4) and so on."

This means that viewing a WordStar file in an application that doesn't interpret and decode this behaviour can be a bit taxing for the brain.
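A minimal sketch of just that bit-flipping trick described in the PRONOM record (real WordStar files also contain control codes that this ignores):

```python
def decode_wordstar(data: bytes) -> str:
    """Recover readable text from WordStar 4.0 word-end bytes.

    WordStar sets the high bit of the last byte of each word;
    clearing bit 7 restores the plain ASCII character
    (e.g. 0xF2 'ò' -> 0x72 'r', 0xE4 'ä' -> 0x64 'd')."""
    return "".join(chr(b & 0x7F) for b in data)
```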

Having looked back at the product description for Quick View Plus 2017 I discovered that WordStar for DOS is one of their supported file formats. It seems this functionality had not been intentionally deprecated.


An imperfect migration story

Published 8 Jun 2018 by Jenny Mitcham in Digital Archiving at the University of York.

Over the past six years as a digital archivist at the Borthwick Institute I have carried out a very, very small amount of file migration. The focus here has been on getting things 'safe', backed up and documented (along with running a few tools to find out what exactly we have and ensure that what we have doesn't change).

I've been deliberately avoiding file migration because:

  1. there is little time to do this sort of stuff 
  2. we don't have a digital archiving system in place
  3. we don't have a means to record the PREMIS metadata about the migrations (and who wants to create PREMIS by hand?)


The catalyst for a file migration

Recently I had to update my work PC to Windows 10.

Whereas colleagues might be able to just set this upgrade off and get it done while they had lunch, I left myself a big chunk of time to try and manage the process. As a digital archivist I have downloaded and installed lots of tools to help me do my job - some I rely on quite heavily to help me ingest digital content, monitor files over time and understand the born digital archives that I work with.

So, I wanted to spend some time capturing all the information about the tools I use and how I have them set up before upgrading, and then more time post-upgrade to get them all installed and configured again.

...so with a bit of thought and preparation, everything should be fine...shouldn't it?

Well it turns out everything wasn't fine.


Backwards compatibility is not always guaranteed

One of the tools I rely on and have blogged about previously is Quick View Plus. I have been using Quick View Plus version 12 for the last 6 years and it is a great tool for viewing a range of files that I might not have the software to read otherwise.

In particular it was invaluable in allowing me to access and identify a set of WordStar 4.0 files from the Marks and Gran archive. These files were not accessible through any of the other software that was available to me (apart from in a version of WordStar I have installed on an old Windows 98 PC that I keep under my desk for special occasions).

But when I tried to install Quick View Plus 12 on my PC after the upgrade, I discovered it was not compatible with Windows 10.

This was an opportunity to try out a newer version of the Quick View Plus software, so I duly downloaded an evaluation copy of Quick View Plus 2017. My first impressions were good. It seemed the tool had come along a bit in the last few years and there was some nice new functionality around the display of metadata (a potential big selling point for digital archivists).

However, when I tried to open some of the 120 or so WordStar files we have in our digital archive I discovered they were no longer supported.

They were no longer identified as WordStar 4.0.

They were no longer displaying correctly in the viewer.

They looked just like they do in a basic text processing application

...which isn't ideal because as described in the PRONOM record for WordStar 4.0 files:

"On the surface it's a plain text file, however the format 'shifts' the last byte of each word. Effectively it is 'flipping' the first bit of the ASCII character from 0 to 1. so a lower case 'r' (hex value 0x72) becomes 'ò' (hex value 0xF2); lower case 'd' (hex 0x64) becomes 'ä' (hex 0xE4) and so on."

This means that viewing a WordStar file in an application that doesn't interpret and decode this behaviour can be a bit taxing for the brain.
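The fix for this is mechanical: clearing the high bit recovers the ASCII text. A minimal Python sketch of the bit-flip described in the PRONOM record (an illustration only; real WordStar files also contain dot commands and other control characters that this ignores):

```python
def decode_wordstar(data: bytes) -> str:
    """Clear the high bit that WordStar 4.0 sets on the last byte of each word.

    0xF2 ('ò') becomes 0x72 ('r'), 0xE4 ('ä') becomes 0x64 ('d'), and so on,
    exactly as the PRONOM record quoted above describes.
    """
    return bytes(b & 0x7F for b in data).decode("ascii", errors="replace")


print(decode_wordstar(b"wor\xe4"))  # -> word
```

This only recovers the plain text, not the formatting, which is one reason a proper migration via a viewer like Quick View Plus is still preferable.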

Having looked back at the product description for Quick View Plus 2017, I discovered that WordStar for DOS is listed as one of their supported file formats. It seems this functionality had not been intentionally deprecated.

I emailed Avantstar Customer Technical Support to report this issue and with a bit of testing they confirmed my findings. However, they were not able to tell me whether this would be fixed or not in a future release.


A 'good enough' rescue

This prompted me to kick off a little rescue mission. Whilst we still had one or two computers in the building on Windows 7, I installed Quick View Plus 12 on one of them and started a colleague off on a basic file migration task to ensure we have a copy of the files that can be more easily accessed on current software.

A two-pronged attack using limited resources is described below:

Files were saved with the same names as the originals (including the use of SHOUTY 1980s upper case) but with new file extensions. The original file extensions were also captured in the names of these migrated files. This is because (as described in a previous post) users of early WordStar for DOS packages were encouraged to make use of the three-character file extension to add contextual information about the file (gulp!).
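The post doesn't spell out the exact renaming convention, but one plausible scheme, folding the original extension into the migrated file's name so that the contextual information survives, could look like this (the function name and sample filenames are hypothetical):

```python
def migrated_name(original: str, new_ext: str) -> str:
    # Hypothetical naming scheme: keep the original name (including its
    # upper case) and fold the old three-character extension into the new
    # file name, so the contextual information it carried is not lost.
    if "." in original:
        stem, _, old_ext = original.rpartition(".")
        base = f"{stem}_{old_ext}"
    else:
        base = original
    return f"{base}.{new_ext}"


print(migrated_name("MEMO.JAN", "txt"))  # -> MEMO_JAN.txt
```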

The methodology was fully documented and progress has been noted on a spreadsheet. In the absence of a system for me to record PREMIS metadata, all of this information will be stored alongside the migrated files in the digital archive.


Future work

We've still got some work to do. For example, some spot-checking against the original files in their native WordStar environment - I believe that the text has been captured well but that there are a few formatting issues I'd like to investigate.

I'd also like to use veraPDF to check whether the PDF/A files that we have created are actually valid (I'm keeping my fingers firmly crossed!).

This was possibly not the best-thought-out migration strategy, but as there was little time available, my focus was on coming up with a methodology that was 'good enough' to enable continued access to the content of these documents. Of course the original files are also retained, and we can go back to these at any time to carry out further (better?) migrations in the future.*

In the meantime, a follow-up email from Avantstar Technical Support has given me an alternative solution. Apparently, Quick View Plus version 13 (which our current licence for version 12 entitles us to install at no extra cost) is compatible with Windows 10 and will enable me to continue to view WordStar 4.0 files on my PC. Good news!



* I'm very interested in the work carried out at the National Library of New Zealand to convert WordStar to HTML and would be interested in exploring this approach at a later date if resources allow.

Mediawiki file upload final steps fail

Published 8 Jun 2018 by arne in Newest questions tagged mediawiki - Server Fault.

I have a MediaWiki set up on an IIS7 server. It was set up as described in the manual. I activated file upload as described here. I think I've got the folder permissions right: modify-read-write for the IIS_IUSRS group on the images subdir of the MediaWiki install. My php.ini also allows file uploads.

When I try to upload a PNG image, I can see a temporary file with a name of phpF267.tmp being created in a temp folder; its magic header shows it is clearly a PNG file. However, MediaWiki tells me that it was unable to open the lock file for mwstore://local-backend/local-public/f/f1/bla.png and does not copy the file. It does not even create the f/f1 directories in the images folder.

Any ideas what might be wrong?


Use mediawiki functions in code

Published 7 Jun 2018 by nduser in Newest questions tagged mediawiki - Stack Overflow.

I have MediaWiki installed and configured on my machine. I want to use the parse() function in a PHP script I'm writing in order to convert XML from a file to HTML. I currently have it working using the MediaWiki API, but I want to use the MediaWiki installation itself, instead of calling the API. How can I use the functions that the MediaWiki install provides? (New to this whole thing.)


How Orwell gave propaganda a bad name

Published 7 Jun 2018 by in New Humanist Articles and Posts.

Today, we associate propaganda with totalitarian states – but it also enables people to challenge power from below.

PHPWeekly June 7th 2018

Published 7 Jun 2018 by in PHP Weekly Archive Feed.

PHPWeekly June 7th 2018
Curated news all about PHP.  Here's the latest edition
PHP Weekly 7th June 2018
Welcome to the latest @phpweekly news.
 
We start this week with a new podcast - the PHP Web Development Podcast with Mathew Kimani. In the first episode Mathew covers current PHP statistics.
 
Also we take a look at using the toolkit Serverless Framework with OpenWhisk PHP.
 
The 20th Oscon takes place next month in Portland. Tickets are currently on sale, with the Early Price discount ending on the 8th June.
 
Plus, if you are interested in Eloquent, we have a new course on connecting to a database with Laravel's Eloquent ORM.
 
And finally, we have an article about why you should be contributing to open source projects in 2018.
 
Enjoy your read!
 
Cheers
Ade and Katie

Please help us by clicking to our sponsor:

encrypt php scripts 
Protect your PHP Code
Why not try SourceGuardian 11. Click here to download a 14 Day Trial copy. Protect your code using Windows, Linux or Mac and run everywhere with our free Loaders.

Articles

Top 100 PHP Functions
Here is a list of the most used PHP native functions, named and ranked from 1 to 100.

Why You Should Contribute to Open Source Projects in 2018
Open source can change your life. It has changed mine with Corcel, an open source project that I started in 2013 which changed who I am and how I live. Read this article to learn more about the story of the project and how it became a passion for open source.

PHP Developers: 4 Questions To Ask
In recent years, as the online industry has grown, PHP developers are becoming more important and sought after. It is important to know just what you are looking for in a developer and what a PHP programmer can do for your business. The language is seeing popularity steadily trend upwards with no signs of slowing down and the amount of developers that want to get started in the industry are innumerable. Here are 4 questions to ask to help find a PHP developer for your needs.

Tutorials and Talks

Build Your First Symfony Console Application with Dependency Injection Under 4 Files
The series about PHP CLI apps continues with a 3rd part about writing a Symfony Console application with Dependency Injection in the first place. Not last, not second, but the first. Luckily, it is easy to start using it and very difficult to…
 
Convert your Bootstrap CSS to Tailwind with Tailwindo
I’ve recently been working on converting one of my side projects from Bootstrap to Tailwind and came across awssat/tailwindo. This package does precisely that – automatically converts Bootstrap component classes to Tailwind utility classes.
 
Using Enums in Laravel
I'm a big fan of enums. Having recently worked for a company who use C#, where enums are used extensively, I've got used to being able to reach for them and miss them when they're not available. Furthermore, I've created a Laravel package called laravel-enum which gives you access to helper functions such as listing keys and values, attaching descriptions to values, and validating requests which are expecting enum values. This guide walks through the process of installing the Laravel package and includes examples of usage and best practice.
 
Using Serverless Framework with OpenWhisk PHP
Serverless Framework is a toolkit to help you manage and deploy a serverless application. (Personally, I’m not a fan of the name as the word “Serverless” already has a meaning in the same space!) It’s a useful tool and supports all the major providers, though AWS Lambda seems to be first-among-equals. The OpenWhisk plugin for Serverless is maintained by the rather excellent James Thomas, so if you have any questions, ping him! As I build more complex PHP based OpenWhisk applications, I thought I’d explore how Serverless makes this easier.
 
Debugging Intermittent Test Failures with Bash and PHPUnit
If you’ve been writing PHPUnit tests for long, you’ve probably run into a time when a test works 90% of the time, but every now and then it throws an unexpected error or failure. If it happens only rarely, you might just get around it by re-running your test suite, but if you’ve got a large test suite or intermittent failures become really common, you probably need to address the issue. Here’s a quick tip for debugging tests like this.
 
Integration with NEM using PHP
We will learn how to use NEM blockchain to create wallets. We will integrate NEM with Laravel Framework and build the web app. You should be familiar with making apps with Laravel framework and you need a fresh Laravel Installation.
 
Boost Your Website Performance With PhpFastCache
In this article, we're going to explore the PhpFastCache library, which allows you to implement caching in your PHP applications. Thus, it helps to improve overall website performance and page load times.
 
Serverless and PHP: Introducing Bref
Serverless basically means “Running apps without worrying about servers”. Obviously there are still servers involved, the main difference is that you do not maintain the servers and reserve their capacity. They are scaled up or down automatically and you pay only for what you use. This article intends to explain what serverless means for web applications and more specifically for PHP.
 
The Art of The Error Message
The concept of “embracing failure” is big in the tech industry. Fail fast, fail often! is almost an industry mantra. But there’s an everyday type of failure that doesn’t get much attention in the product development process. That’s right. The humble error message.
 
Creating Multiple Windows for Slack on Your Mac Using Single-Site Browsers
Tighten is a consultancy. That means we're not just a product company; we also work on other people's applications and sites. Frequently, one or more of our developers will be tasked to work with the same client for months. Every day they wake up, open up Slack--which is the primary tool Tighten, as a remote company, uses to build culture and relationships--and switch to the client's Slack.
 
Exakat 1.2.9 Review
Exakat 1.2.9 is out with a truck load of new analyzers. While we are preparing actively for IPC in Berlin and DPC in Amsterdam, we took time to add no less than 5 new analyzers : Flexible Heredoc Syntax for PHP 7.3, Use the blind var, Inexistent compact, Type hinted reference and Type hint / default mismatch . That’s going to be the longest Exakat 1.2.9 review.

News and Announcements

CakePHP Conference - June 14-17th 2018, Nashville
CakeFest is organised for developers, managers and interested newcomers alike. Bringing a world of unique skill and talent together in a celebration and learning environment around the world's most popular PHP framework. Celebrating over eleven years of success in the PHP and web development community, CakePHP's 2018 conference will be an event not to miss. Tickets are on sale now.

Oscon - July 16-19th 2018, Portland
OSCON is the complete convergence of the technologies transforming industries today, and the developers, engineers, and business leaders who make it happen. The 20th Open Source Convention takes place next July. From architecture and performance, to security and data, get expert full stack programming training in open source languages, tools, and techniques. Tickets are on sale now, with the Early Price discount ending tomorrow.

PHP Detroit Conference - July 26-28th 2018, Livonia
PHPDetroit is a two-day, regional PHP conference that brings the community together to learn and grow. We're preceding the conference with a 2 track tutorial day that will feature 4 sessions covering various topics. We will also be running an UnCon alongside the main tracks on Friday and Saturday, where attendees can share unscheduled talks. Tickets are on sale now.

PHP Developer Days - September 21st-22nd 2018, Dresden
After a very successful edition in 2017 we aim to push this community driven conference to the next level in 2018. For the first time we will offer a full day with workshops, so you can get the most out of our excellent trainers. On the second day our international speakers will provide you with great sessions in a single track. We are committed to creating a unique community experience - an event where everyone is among #PHPriends. 

Laracon AU - October 18-19th 2018, Sydney
Two days of learning and networking with the Laravel community in Australia for the first time. The two day conference will see us welcome some of the most prominent Laravel community members including Matt Stauffer, Adam Wathan, and the framework’s author Taylor Otwell as speakers alongside a host of terrific local speaking talent. Tickets are on sale now.

Podcasts

Three Devs and a Maybe Podcast - Software Design with Scott Wlaschin
In this week's episode we are lucky to have Scott Wlaschin back on the show to discuss design within software. We start off by highlighting leaky abstractions, adopted tool-chains and transpiling languages into JavaScript. From here we move on to talk about what makes ‘good code’, and how evaluating this is heavily reliant on the requirements and context you are in. Finally, we discuss how OO and FP software architectures differ, advantages of being explicit over implicit and information-hiding at API boundaries.
 
This week Cal spoke to Brett Florio and the Foxy.io crew about the history behind the company and the roles each of them has within it.
 
Taking a deeper dive into the release of Magento 2.2.4, Kalen and Phillip ask the all-important question: how much bundling of 3rd party integrations in Magento is too much? Is there a line, and does the 2.2.4 release cross it? Listen now!
 
PHP Roundtable Podcast Episode 72: Secret Project Revealed!
We finally unveil the super-secret project to the world! Listen in to find out what it is and how you can get your hands on one.
 
PHP Ugly Podcast #107:  Drugs, Tattoos, and Coding (aka PHP: Pot Head Programming)
Topics include solving a 43Mb favourites list and testing FTW.
 
North Meets South Web Podcast Episode 45: Event Sourcing, Auditing and Finite State Machines
Jake and Michael make their return to discuss event sourcing, auditing and reporting, and finite state machines after a busy real life schedule kept them away from recording for a month!
 
PHP Web Development Podcast Ep #1 - PHP Statistics
In this week's episode, Mathew covers current PHP statistics. He will highlight why developing relationships with good agencies is key to finding good jobs, especially contract and remote jobs.

Reading and Viewing

ZendCon & OpenEnterprise 2018
In case you missed it, ZendCon is coming back to town - and it's going to be quite an exciting one.
 
PHP in 2018
PHP in 2018 is a talk by PHP creator Rasmus Lerdorf, which focuses on new features in PHP 7.2 and 7.3. We have some exciting low-level performance wins coming to PHP 7.3, which should be out late 2018. It’s highly encouraging that PHP’s focus is mainly on performance in the PHP 7.x releases.
 
Building a PHP Framework: Part 3 – Time For Action
In Part 2 of this series I discussed what web frameworks are and, in (very) broad terms, how they worked. Now it’s time to take the first step toward actually building a framework.
 
New Course: Connect to a Database With Laravel's Eloquent ORM
In our new course, Connect to a Database With Laravel's Eloquent ORM, you'll learn all about Eloquent, which makes it easy to connect to relational data in a database and work with it using object-oriented models in your Laravel app. It is simple to set up, easy to use, and packs a lot of power.
 
Cloudways Interview: Dwight Walker on the Challenges of Running a Web Agency
Many of our interviews are focused on digital agency owners, who build websites for clients using specific technologies and tools. Today, however, we wanted to highlight the challenges involved in running a web development agency and the future of PHP.

Jobs





Do you have a position that you would like to fill? PHP Weekly is ideal for targeting developers and the cost is only $50/week for an advert.  Please let me know if you are interested by emailing me at katie@phpweekly.com

Interesting Projects, Tools and Libraries

omeka-s
Omeka S is a web publication system for universities, galleries, libraries, archives, and museums. It consists of a local network of independently curated exhibits sharing a collaboratively built pool of items, media, and their metadata.
 
composer-patches
Simple patches plugin for Composer. Applies a patch from a local or remote file to any package required with composer.
 
phpboost
This web application allows anyone, with no particular webmastering knowledge required, to create their own website.
 
applicationinsights-php
This project extends the Application Insights API surface to support PHP.
 
freshrss
FreshRSS is a self-hosted RSS feed aggregator. It is at the same time lightweight, easy to work with, powerful and customisable.
 
unit3d
The Nex-Gen Private Torrent Tracker (Aimed For Movie /TV Use).
 
rollbar-php
This library detects errors and exceptions in your application and reports them to Rollbar for alerts, reporting, and analysis.
 
forms
Generating, validating and processing secure forms in PHP. Handy API, fully customisable, server & client side validation and mature design.
 
crypt-gpg
Crypt_GPG is a PHP package to interact with the GNU Privacy Guard (GnuPG).
 
slim-session
A very simple session middleware for Slim Framework 3.
 
projectsend
A free, open source software that lets you share files with your clients, focused on ease of use and privacy. It supports clients groups, system users roles, statistics, multiple languages, detailed logs... and much more!
 
openemr
The most popular open source electronic health records and medical practice management solution. ONC certified with international usage, OpenEMR aims to be a superior alternative to its proprietary counterparts.
 
witycms
A lightweight Content Management System (CMS) in PHP, Model-View-Controller oriented.
 
crunz
Crunz is a framework-agnostic package to schedule periodic tasks (cron jobs) in PHP using a fluent API.

 

So, how did you like this issue?

We are still trying to grow our list. If you find PHP Weekly useful please tweet about us! Thanks.
Also, if you have a site or blog related to PHP then please link through to our site.

 
Copyright © 2018 PHP Weekly, All rights reserved.
Email Marketing Powered by MailChimp

You Can Now Automatically Format and Mount Block Storage Volumes

Published 6 Jun 2018 by Priya Chakravarthi in The DigitalOcean Blog.

You Can Now Automatically Format and Mount Block Storage Volumes

Since we launched Block Storage Volumes in 2016, we noticed users searching for our tutorials on partitioning and formatting storage devices and volumes in Linux. At the same time, some users were accidentally formatting already pre-formatted volumes due to the manual process involved in setting up.

This was a cue for us to reduce the friction in the user experience and allow users to simply click to add storage to their Droplets. For example, when you attach a thumb drive to your computer it just works and is ready for use instantly. Why should attaching a volume to your Droplet be different?

In late May, we launched the “automatically format and mount” feature across all regions that support Block Storage Volumes. With this new feature, we reduce human errors and speed up the process of attaching external storage to your Droplets.

You Can Now Automatically Format and Mount Block Storage Volumes

This feature is supported, using the DigitalOcean control panel or API, for Droplets running the following operating systems:

DigitalOcean users can select between two popular Linux filesystems, Ext4 or XFS, for formatting their volumes. To get things going, we provide default mount options and use a default mount point corresponding to your volume name.

To customize these mount options, you can SSH into your Droplet and run commands specific to your Linux distribution. If your company or application dictates the use of a specific filesystem that is not currently supported, or you want to control your mount options or name, our in-product instructions are now customized to the operating system version and can be copied and executed as-is.

You Can Now Automatically Format and Mount Block Storage Volumes

Block Storage Volumes provide the same baseline performance for all sizes, which makes them a great fit for the majority of use cases that require attached storage. (ICYMI, we recently detailed some of the performance improvements we’ve made.) With the new “automatically format and mount” feature, adding high performance block storage becomes a breeze.

Ready to try this out? Add a volume to your Droplet now.


PWG Takes Toronto

Published 6 Jun 2018 by Tzviya Siegman in W3C Blog.

Minutes are published at https://www.w3.org/publishing/groups/publ-wg/Meetings/Minutes/2018/2018-05-30-pwg.html and https://www.w3.org/publishing/groups/publ-wg/Meetings/Minutes/2018/2018-05-31-pwg.html.

The Publishing Working Group gathered in Toronto, Ontario for a 2-day face to face meeting at the Kobo office. This was Kobo’s first time hosting a working group meeting, and they did a great job. Thanks especially to Wendy Reid who organized hotel discounts, restaurant reservations, conference rooms, and many other tiny details. She also learned to scribe :).

Our main goal for the meeting was to get the WP Draft and GitHub repo into a position such that we will be able to release an updated draft within about one month. We made a lot of decisions about both major and minor issues that are getting us closer to Web Publications.

Replacing epub:type

Tzviya Siegman offered a history of the IDPF’s namespaced epub:type structural semantics vocabulary. Many key terms have shifted to DPUB-ARIA. We asked participants to offer information about how epub:type is used today. There were a few examples of uses for internal workflows, such as providing meaningful information about book structure for repurposing content. The implementors in the room made it pretty clear that the only terms in the very long list that they look at are toc, landmarks, pagebreak, pagelist, footnote, and noteref. Consensus is that DPUB-ARIA covers use cases that are beyond internal workflows, but there is a need for education about how to correctly implement ARIA.

Infoset

We reached consensus on the minimum infoset. Luc Audrain reviewed the information set as it stands in the current draft, and we agreed that this is a good starting point. We have agreed that we will work in JSON-LD with schema.org as a starting point. We concluded that it is not necessary to list an exhaustive set of resources to define the bounds of the publication, which is defined by the default reading order. We will add a section about boundary determination to the spec. While it is very important to address privacy concerns in our specs, including a privacy policy in the minimum infoset will accomplish nothing. David Wood will add language to the specification regarding privacy and what a UA is expected to do when it encounters a privacy policy.

Manifest Serialization

We have long divided our manifest into descriptive and structural properties. We chose to focus on schema.org for descriptive properties. Using schema.org brings us to the decision to opt for JSON-LD. We discussed specifics of expressing metadata as schema.org properties using a table that Ivan Herman drafted. We will review similar work at Readium, Wiley, and the ScholarlyHTML Community Group to come up with recommendations. There was extensive discussion about topics such as how to ensure accessibility in a cover expressed as JSON-LD. There was much discussion about direction of metadata, titles, and other details to be discussed with the schema.org CG.

Affordances and Use Cases

Benjamin Young explained, “Use cases are a story of what tasks we want to do. Affordances are the things in a document that allow those stories to happen. Mugs afford holding things.” We agreed that we will provide information that allows for affordances, but we will not document such affordances. An example of this is specifying the metadata required for a bookshelf, but not specifying how to create a bookshelf. We then moved through the open issues on Affordances in GitHub and assigned them to members. Each person will use the existing template from the Affordances task force and provide language to add to the specification by 15 June.

Structural Properties

We agreed that URLs in the “manifest” can be represented as either JSON strings or objects, where the string is interpreted directly as a URL. A list of URLs can thus be represented by an array of either strings or objects.
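A consumer of such a manifest therefore needs a small normalization step. A sketch, under the assumption (not settled spec language) that the object form carries its URL in a "url" member:

```python
def normalize_links(items):
    """Accept a JSON array whose entries are bare URL strings or objects.

    A bare string is interpreted directly as a URL; for the object form,
    the "url" member is assumed here purely for illustration.
    """
    urls = []
    for item in items:
        if isinstance(item, str):
            urls.append(item)
        else:
            urls.append(item["url"])
    return urls


print(normalize_links(["cover.jpg", {"url": "chapter1.html"}]))
```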

Default Reading Order and TOC

There was extensive discussion about whether the current spec language is clear enough about default reading order and whether it is better and/or good enough for both humans and machines. We eventually resolved that:

(I may have gotten some of these details wrong.) Significant details to come as we work on the spec.

PWP/EPUB 4

We began the discussion asking what the differences in an infoset for PWP as opposed to WP might be. We drifted to a discussion of whether PWP is necessary as a standalone specification if EPUB 4 is simply a packaged WP. To be determined still is whether the packaging format will be the Web Packaging Format in incubation or some other format, such as a simple zip.

We ended the day assigning issues to individuals and planning to revise our documentation over the coming weeks. Thanks to all who traveled and participated, especially Kobo and their glorious doughnuts.

A group of smiling people and a guide dog stand in front of a brick wall next to a table with computers on it.

Publishing Working Group at the Kobo office


How to use the Mediawiki API to create pages with longform texts and lists in python

Published 6 Jun 2018 by sc4s2cg in Newest questions tagged mediawiki - Stack Overflow.

I recently learned how to scrape my mom's recipes from a cooking website. My current goal is to put those recipes into a self-hosted MediaWiki server. Since all I know is Python, I'm trying to use GET and POST requests and the API to create these pages. I've tried various Python libraries, like pywikibot, mwclient, and wptools, with varying degrees of success. Documentation is really lacking for the latter two when it comes to editing/creating wiki pages, and pywikibot has some bugs (reported) that prevent me from logging on or using the pagefromfile.py script.

Luckily, there is sample Python code on the MediaWiki website.

username = 'myusername'
password = 'mypassword'  # see https://www.mediawiki.org/wiki/Manual:Bot_passwords
api_url = 'https://my.wiki.com/api.php'
section = 'new'
sectiontitle = 'Ingredients'
summary = 'ingredients'

# Note: this must be a single string; wrapping the concatenation in curly
# braces (as originally posted) creates a one-element set instead.
message = (" \n\u2022 6 db óriási nyers padlizsán <br>"
    + "\n\u2022 4 db édes, húsos piros paprika, egészben <br>"
    + "\n\u2022 3 db fekete paradicsom, vastagabb karikára szelve <br>"
    + "\n\u2022 1 db zöld jalapeno paprika, egészben <br>"
    + "\n\u2022 2 db nagy vöröshagyma, vastagabb karikára vágva <br>"
    + "\n\u2022 10 cikk fokhagyma <br>"
    + "\n\u2022 1 ek édes piros paprika <br>"
    + "\n\u2022 ízlés szerint só <br>"
    + "\n\u2022 ízlés szerint bors <br>"
    )
page = 'Test'

This code creates a page with the relevant section and message; the result looks like this.

Questions:

  1. How can I create more than one sectiontitle?
  2. If I put in wiki markup, why doesn't MediaWiki format it? For example, if I make the message "# 6db oriasi nyers", MediaWiki creates a message showing "# 6db oriasi nyers" literally instead of "1. 6db oriasi nyers".
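One way to approach question 1 (a hedged sketch, not the official sample: the URL is the question's placeholder, and the token handling is only outlined) is to skip `section=new` entirely, build the whole page as wikitext with `==` headings, and submit a single `action=edit` request:

```python
import json
import urllib.parse
import urllib.request

def build_page(sections):
    """Render {'Ingredients': '* 6 db ...', ...} as wikitext,
    one '== Heading ==' block per section."""
    return "\n\n".join(f"== {title} ==\n{body}" for title, body in sections.items())

def save_page(api_url, title, text, token, summary=""):
    # action=edit replaces the whole page; the CSRF token comes from
    # a prior action=query&meta=tokens request on a logged-in session.
    data = urllib.parse.urlencode({
        "action": "edit", "title": title, "text": text,
        "summary": summary, "token": token, "format": "json",
    }).encode()
    with urllib.request.urlopen(api_url, data) as resp:
        return json.load(resp)

# e.g. save_page('https://my.wiki.com/api.php', 'Test',
#                build_page({'Ingredients': '* 6 db óriási nyers padlizsán'}),
#                token)
```

As for question 2: wikitext list markup (`#`, `*`) only takes effect when it is the very first character of a line, and a leading space switches a line into preformatted mode, which is one likely reason "# 6db oriasi nyers" renders literally.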

new comment (near -32.024, 115.757)

Published 5 Jun 2018 by Sam Wilson in OpenStreetMap Notes.

Comment

Updated 19 days ago by Sam Wilson
Road added, but I'm not sure what it's called here between West Meath and McCabe streets.

Full note

Created 8 months ago
Mathieson ave Is linked in a loop
Updated 5 months ago
Mathieson is also linked at this end to McCabe and West Meath Streets. Check the imagery.
Updated 19 days ago by Sam Wilson
Road added, but I'm not sure what it's called here between West Meath and McCabe streets.

W3C at TU Update

Published 5 Jun 2018 by Ted Guild in W3C Blog.

W3C is pleased to again be participating in TU-Automotive this year. The conference takes place in Novi, Michigan on June 6th and 7th. W3C’s Automotive Lead, Ted Guild, will be participating on two panels.

The annual event covers a range of topics on the future of technology in the automotive industry, including Cybersecurity, Electrification, Autonomous vehicles, Smart Cities, and the emerging business models these advances make possible.

Collaborating on Standards & Best Practices

“The industry is learning that no one company can solve cybersecurity; with such an intertwined supply chain & global implications if something goes wrong, collaborate to deliver safe & secure vehicles.”

W3C has a long track record of creating successful standards, guidelines and best practices for the Open Web Platform. We approach standards from both individual technology viewpoints and specific industry focuses. Ted leads the automotive activity at W3C, where we are defining a robust in-vehicle application ecosystem. Modern vehicles comprise many small electronic control units (essentially small computers controlling different functions) on a local network, and they provide considerable digital information.

Initially, people are surprised to learn that W3C is doing standards work for automotive. It is less surprising when you consider that there are more web developers writing applications than for any other platform, and that the automotive industry wants to attract content and service providers to its platform, most of whom are already creating applications using web technologies.

While exposing this telematics information in a consistent manner has been the initial main focus, the group has also been working on media services and libraries, CDNs, notifications, location-based services, privacy and security, leveraging W3C and other standards to solve automotive big-data needs, and exploring applying the W3C Web Payments specification to automotive use cases such as fueling, recharging, tolls, parking and services.

W3C knows well the benefits of taking a standards approach. Common standards enable innovation and interoperability and create new business paradigms; there is little arguing that the web has already been transformative across all industries.

V2X: Ensuring Secure Communications

“In the future, vehicles will talk to everything. Ensure that communication channels are secure.”

W3C is headquartered at the Massachusetts Institute of Technology (MIT) Computer Science and Artificial Intelligence Laboratory. There, Ted has been working with others on connected-vehicle cybersecurity as a complement to the W3C automotive standards work, and is looking to form a research group with proposed focuses on applications, network interactions and data integrity.

W3C’s vision for “One Web” brings together thousands of dedicated technologists representing more than 400 Member organizations and dozens of industry sectors. W3C is jointly hosted by the MIT Computer Science and Artificial Intelligence Laboratory (MIT CSAIL) in the United States, the European Research Consortium for Informatics and Mathematics (ERCIM) headquartered in France, Keio University in Japan and Beihang University in China. For more information see https://www.w3.org/.


WebDriver motors on to W3C Recommendation

Published 5 Jun 2018 by Michael[tm] Smith in W3C Blog.

Today we celebrate the publication of the WebDriver specification as a W3C Recommendation.

WebDriver is a powerful technology for browser automation, often used to enable cross-browser testing of Web applications, but also used for many other purposes.

The WebDriver spec defines a set of interfaces and a wire protocol that are platform- and language-neutral and that allow out-of-process programs to remotely control a browser in a way that emulates the actions of a real person using the browser.

WebDriver is widely used day-to-day by Web developers around the world to drive testing of their Web applications, and to ensure that they work across multiple browsers. It is also used for cross-browser testing by browser vendors as part of the web-platform-tests effort, in order to catch and eliminate browser incompatibilities before they ship.

There are already implementations of the WebDriver standard available for every major desktop browser, and language bindings are offered by a number of projects, notably Selenium.
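Because the wire protocol is plain JSON over HTTP, a browser can be driven without any bindings at all. A minimal sketch (assuming a driver such as geckodriver listening locally on port 4444; all names here are illustrative, not part of the Recommendation's sample code):

```python
import json
import urllib.request

WEBDRIVER = "http://localhost:4444"  # assumed local driver endpoint

def build_request(method, path, body=None):
    """Each WebDriver command is one HTTP request: the session id is
    carried in the path and parameters travel as a JSON body."""
    data = None if body is None else json.dumps(body).encode()
    return urllib.request.Request(
        WEBDRIVER + path, data=data, method=method,
        headers={"Content-Type": "application/json"},
    )

def command(method, path, body=None):
    # Responses wrap their payload in a "value" member.
    with urllib.request.urlopen(build_request(method, path, body)) as resp:
        return json.load(resp)["value"]

def visit_and_get_title(url):
    session = command("POST", "/session", {"capabilities": {}})["sessionId"]
    command("POST", f"/session/{session}/url", {"url": url})
    title = command("GET", f"/session/{session}/title")
    command("DELETE", f"/session/{session}")
    return title
```

In practice most developers reach for a binding like Selenium instead, but the sketch shows how little machinery the protocol itself requires.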

Having a standard way to automate interaction with a browser — a way that works across different browsers and browser engines — is a big win for Web developers in helping ensure their Web applications work in the best way they should for their users.


WCAG 2.1 is a W3C Recommendation

Published 5 Jun 2018 by Andrew Kirkpatrick in W3C Blog.

By Andrew Kirkpatrick and Michael Cooper

Web Content Accessibility Guidelines (WCAG) 2.1 is now a W3C Recommendation. This is an evolution of W3C’s accessibility guidance, expanding provisions for mobile, low vision, and cognitive and learning needs, while maintaining W3C’s standard of implementable, technology-neutral, objectively testable and universally applicable accessibility guidance.

Publication as a W3C Recommendation finalizes the development process and indicates that the W3C considers the updated guidelines ready for implementation on web content. A WCAG 2.1 press release is available.

New support

For users of mobile devices, WCAG 2.1 provides updated guidance including support for user interactions using touch, handling more complex gestures, and for avoiding unintended activation of an interface. For users with low vision, WCAG 2.1 extends contrast requirements to graphics, and introduces new requirements for text and layout customization to support better visual perception of web content and controls. For users with cognitive, language, and learning disabilities, WCAG 2.1 improvements include a requirement to provide information about the specific purpose of input controls, as well as additional requirements to support timeouts due to inactivity. This can help many users better understand web content and how to successfully interact with it.

As with WCAG 2.0, following these guidelines will continue to make content more accessible to a wider range of people with disabilities, including blindness and low vision, deafness and hearing loss, limited movement, speech disabilities, photosensitivity, and learning disabilities and cognitive limitations. Following these guidelines can also make websites more usable for all users.

Transition from WCAG 2.0

WCAG 2.0 remains a W3C Recommendation. It was designed to be a highly stable, technology-agnostic standard, with informative supporting resources. The Working Group has taken care to maintain backwards compatibility of WCAG 2.1 with WCAG 2.0. All the criteria from WCAG 2.0 are included in WCAG 2.1, so web sites that conform to WCAG 2.1 will also conform to WCAG 2.0. As with WCAG 2.0, WCAG 2.1 will be supported by an extensive library of implementation techniques and educational materials, including Understanding WCAG 2.1 and Techniques for WCAG 2.1. These resources have been redesigned and moved from their previous locations to allow the Working Group to update them on an ongoing, instead of periodic, basis.

W3C encourages organizations and individuals to use WCAG 2.1 in web content and applications, and to consider WCAG 2.1 when updating or developing new policies, in order to better address the needs of more web and mobile users with disabilities.

Process and timeline

The Accessibility Guidelines Working Group (AG WG) met an ambitious timeline and completed the work on schedule. Previously, WCAG 2.0 had taken many years to develop, partly because of its goals to be both technology neutral and future-proofed. For many years after the completion of WCAG 2.0, the Working Group focused on supporting those guidelines through updates to Understanding and Techniques. Over time, though, new technologies and use cases emerged which, while still within the scope of WCAG 2.0, may not have been directly addressed.

To better address a range of issues, the Working Group began to explore updated guidance initially on extensions, and then shifted to a full-fledged dot-release. By this time, the need for new guidance, particularly to address the needs of users of mobile devices, users with low vision, and users with cognitive or learning disabilities, had become more urgent. A timeline was set for WCAG 2.1 that would allow its guidance to be finalized in 18 months, and requirements set to keep its new success criteria within the established WCAG 2.0 framework.

Initial proposals for new success criteria were developed by the Working Group, including task forces focusing on specific areas, which also considered many suggestions for improvement that had been submitted by the public over the years. Once an initial set of proposals was established, the Working Group considered how to incorporate them into the guidelines. Candidate success criteria needed to be clear, realistic both to implement and to evaluate, useful to users, and non-redundant. These characteristics are determined by consensus of the Working Group after careful scrutiny and evaluation. This high bar means that many good suggestions needed to be deferred to future versions of guidelines in order either to await technology advances or provide more time to refine the guidance.

Ultimately, 17 new success criteria were added to WCAG 2.1. Once the final set of success criteria was chosen, they were tested in implementations across different types of websites and web content to ensure they were implementable. We want to thank the implementers who worked hard on a short timeline to help the Working Group demonstrate implementability of the success criteria, including ones that were at risk.

Future efforts

Many people hoped WCAG 2.1 would provide more new guidance than it does. The requirement of compatibility with WCAG 2.0, along with the aggressive timeline, limited what could confidently be added to it. WCAG 2.1 provides important and timely guidance but is still only a step; the Working Group expects to develop another dot-release, WCAG 2.2, to expand the new coverage even further. WCAG 2.2 may be developed under a timeline and requirements similar to those of WCAG 2.1, though we plan to refine the process to address challenges experienced during the development of WCAG 2.1.

In addition to a further dot-release of WCAG 2, the Accessibility Guidelines Working Group has been working in parallel on a more major revision to the accessibility guidelines, which would not have the same structure as WCAG 2. Beyond web content, these new guidelines are intended to incorporate guidance for user agents and other tools, so that requirements that depend on tool support are clearer for authors, and to address issues of conformance and testability in a different way from WCAG 2. This is a major multi-year project, which is the reason additional updates to WCAG 2 are needed in the meantime. The plan for the new accessibility guidelines (which go beyond simply web content) is still being shaped, and while not formally named, the project has been code-named “Silver”. Development has been taking place in the Accessibility Guidelines Working Group via the Silver Task Force, in close collaboration with the Silver Community Group to support broader participation and incubation. Input from broad perspectives is critical to this work, and people representing a broad set of stakeholder groups, including those who work in non-English language environments, are invited to participate.

Contribution

Although the completion of WCAG 2.1 is a major milestone, there is obviously plenty of additional work to do. In addition, W3C is coordinating with national and international regions updating their standards and policies, including the current update of the European Norm (EN) 301 549, and discussions to update the Web accessibility standard in China. The work depends on input from participants and public comments.

The Accessibility Guidelines Working Group participation page provides general information about how to participate in the Working Group, and the instructions for commenting on WCAG 2 documents provides information about how to comment on ongoing work. We hope WCAG 2.1 is received as a useful update to web content accessibility guidance and look forward to collaboration on development of further updates.


The search for “British” rights

Published 5 Jun 2018 by in New Humanist Articles and Posts.

In attempting to dismantle human rights laws allegedly imposed by Brussels, we may find we need them more than ever

git.legoktm.com registration now open

Published 3 Jun 2018 by legoktm in The Lego Mirror.

git.legoktm.com is now open for hosting free software projects, providing git hosting, issue trackers, and basic wiki functionality. It runs the free software Gogs: "a painless self-hosted Git service".

You're welcome to host any freely licensed projects on git.legoktm.com.

I've been running git.legoktm.com for two years now, mainly using it to host personal projects and things a few friends asked for. I think others will find a friendly git hosting service useful. Nearly all of my major projects can be found on git.legoktm.com, whether they are the canonical repository, or just a mirror (automatically synchronizing about every 10 minutes).

In terms of privacy, you will need to confirm your email before being able to join the site. I collect minimal server logs, and only use IP address information for anti-abuse measures. Mail is handled by FastMail. You should be able to delete your data at any time. Backups are currently running weekly, but I can increase frequency if usage/demand increases.

If you're comfortable with your current git host, feel free to set up a mirror! Git is a great distributed protocol, and mirroring helps increase the right to fork.
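Setting up such a mirror can be sketched roughly like this (a hedged sketch: both URLs are placeholders, and `mirror_repo` is an illustrative helper, not a command the site provides):

```shell
# Keep DEST (e.g. a repo on git.legoktm.com) in sync with SRC.
mirror_repo() {
    src=$1; dest=$2; dir=$3
    # --mirror copies every ref (branches, tags, notes), not just HEAD
    [ -d "$dir" ] || git clone --quiet --mirror "$src" "$dir"
    git -C "$dir" fetch --quiet --prune origin   # pick up new and deleted refs
    git -C "$dir" push --quiet --mirror "$dest"  # replay them onto the mirror
}

# e.g. from cron every 10 minutes:
# mirror_repo https://example.com/you/project.git \
#             https://git.legoktm.com/you/project.git project.git
```

Running it periodically from cron or a systemd timer keeps the mirror fresh with no manual steps.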

If you have any questions, you can contact me on Mastodon, email, IRC, or the git.legoktm.com support tracker.


Mediawiki - How do I change which page is the main page?

Published 3 Jun 2018 by Juan Olivier in Newest questions tagged mediawiki - Stack Overflow.

I have set up my own MediaWiki for a project. I would like to change the main page, but the only thing the MediaWiki FAQ says is:

“By default, MediaWiki looks for a page with the title Main Page and serves this as the default page. This can be changed by altering the contents of MediaWiki:Mainpage to point to a different title. If this does not change the 'Main Page' link included on the sidebar at install time, edit MediaWiki:Sidebar.”

The problem is I do not know where to edit this “MediaWiki:Mainpage”. Where do I find this line to edit it? Also, if I make a page with the name “x” and I want it to be the main page, do I then change “MediaWiki:Mainpage” to “MediaWiki:x”?


cardiCast episode 32 Madeline Veitch

Published 3 Jun 2018 by Justine in newCardigan.

newCardigan interviews …

Madeline Veitch

Clare Presser in conversation with Madeline Veitch – Research, Metadata, and Zine Librarian at SUNY New Paltz – in New York for the Art Libraries Society of North America (ARLIS/NA) conference “Out of Bounds”.

Clare and Madeline discuss all things Zine!

For more information about ARLIS/NA:

@ARLIS_NA

arlisna.org

newcardigan.org
glamblogs.newcardigan.org

Music by Professor Kliq ‘Work at night’ Movements EP.
Sourced from Free Music Archive under a Creative Commons licence.


DDD for Night Owls

Published 2 Jun 2018 by Rob Crowley in DDD Perth - Medium.

Rimma Shafikova wowing the crowd at DDD by Night (DDD by Night Flickr Album)

As a member of the DDD Perth organising committee, it is with a significant degree of pride that I look back and see how the conference has grown and evolved over the past few years. From the inaugural event held back in 2015 with mainly developer-focused content, we have seen the number of attendees almost double year on year and the topics become more diverse, encompassing all aspects of software delivery. As an organising committee, we deeply believe in the importance of championing diversity and inclusion in order to grow an even stronger Perth IT community. This year we have taken a number of measures to improve this further, such as providing day care facilities and redefining our Code of Conduct to ensure we create an environment in which everyone feels safe to be themselves. As you can no doubt appreciate, a huge amount of effort goes into organising a conference such as DDD Perth, and it is really a labour of love for all involved. It is far from a selfless act, however: the feeling you get when everything comes together on conference day is massively exhilarating. Dare I say even addictive 😊

In 2017 we had over one hundred submissions in response to our call for proposals. While this number was gratifying in and of itself, we as an organising committee were truly thrilled by the variety of the topics and diversity of our wonderful submitters. One of the unique aspects of DDD is that the agenda is democratically chosen by the community. This however does not prevent the organising committee from sharing both the elation and disappointment of our submitters should their session be voted in or narrowly miss out. This was particularly agonizing last year as the quality of submissions was universally excellent (and seeing those that have come through so far, 2018 is going to take it into the stratosphere). After taking some time to relax after the frantic conference period, the organising committee sat down late last year and considered approaches we could take to showcase some of the speakers that did not have an opportunity to present at the main conference. DDD by Night was born. DDD by Night events are effectively mini DDDs that will be held a number of times throughout the year. Each will see curated topics aligning to our goals of providing diverse thinking, content and opinions to the Perth IT Community. As an added bonus, they also provide another dose of community driven positivity to the DDD crew!

The first DDD by Night event was held in May and saw three amazing talks by three equally amazing speakers. First up was Mandy Michael, Fenders Perth founder, CSS aficionado and variable font evangelist (I could go on but will stop there; if you couldn’t tell, we think Mandy is fantastic!) talking about why CSS is awesome. Next up was the equally impressive Rimma Shafikova, teaching us how we can leverage the capabilities of graph databases to improve the efficiency with which we can learn Mandarin. Rimma’s ability to explain complex topics in an engaging and simple manner was massively impressive. We rounded off the night with Tony Morris giving a very gentle (thanks Tony!) and engaging introduction to functional programming with Haskell. The recordings and photos from the event are available on YouTube and Flickr respectively, and if you love the content as much as we do then please subscribe.

DDD Perth has come a long way in the past three years and the DDD by Night events represent another step on this journey, one that we are incredibly excited about. We look forward to seeing you at the next one.


DDD for Night Owls was originally published in DDD Perth on Medium, where people are continuing the conversation by highlighting and responding to this story.


How to find Pipe with Native Search Engine?

Published 2 Jun 2018 by johny why in Newest questions tagged mediawiki - Stack Overflow.

I'm performing a string search using the URL API. My string contains "|", but the API is interpreting it as a magic character instead of part of my search string. E.g.: I want to find the verbatim text MyPrefix|CommonGround, but the two strings are not being searched as a concatenated block.

https://gunretort.xyz/api.php?action=query&list=search&srsearch=MyPrefix|CommonGround&srwhat=text

(Note: URLs in this question may not load in your browser; they are intended only to show my URL syntax.)

i've tried substituting | with "{{!}}", but results are the same.

https://gunretort.xyz/api.php?action=query&list=search&srsearch=MyPrefix{{!}}CommonGround&srwhat=text

Doc says to use the unit separator (U+001F, shown as <US> in the URLs below) as value separator if I need to use a bar as part of a string.

https://gunretort.xyz/api.php?action=query&list=search&srsearch=<US>MyPrefix|Terrorism&srwhat=text&srnamespace=<US>0<US>3000&format=xml

but getting:

Unrecognized value for parameter "srnamespace": 03000

Also attempting to substitute the html-decoded version &#31;,

https://gunretort.xyz/api.php?action=query&list=search&srsearch=&#31;MyPrefix|Terrorism&srwhat=text&srnamespace=&#31;0&#31;3000&format=xml

but not recognized:

{ "error": { "code": "nosearch", "info": "The \"search\" parameter must be set." } }

Tried escaping using percent encoding (e.g. using %1F in place of the separator character). It didn't work:

Warning: Skipping bad option 'MyPrefix%1FCommonGround' for parameter 'uses'.

Tried prefixing it with the separator; that gives an API error.

https://gunretort.xyz/api.php?action=query&list=search&srsearch=%1FNewTag%1FSpinach&srwhat=text&srnamespace=0|3000&format=xml

returns

Doc says srsearch is not a multivalued parameter - it can only take one value, which is passed to the search engine. But I have found that the search engine treats the "|" character as an AND. Maybe some layer is misinterpreting the parameter.

see https://phabricator.wikimedia.org/T194016

it seems that an embedded pipe is not being treated as part of the string. see https://phabricator.wikimedia.org/T194039

Perhaps if we can determine which layer is reacting to the pipe, we can figure out a way to escape it. But theoretically, if srsearch doesn't support multivalues, then I think it is a bug if an embedded pipe breaks srsearch.

I tried escaping the pipe as %7C:

https://gunretort.xyz/api.php?action=query&list=search&srsearch=NewTag%7CAnteater&srwhat=text&srnamespace=0|3000

It's matching pages which contain:

NewTag x Anteater

and

Anteater banana NewTag

but should only match the page containing

NewTag|Anteater

I tried surrounding the entire string with quotes (actual, or URL-encoded).

https://gunretort.xyz/api.php?action=query&list=search&srsearch=%22NewTag%7CAnteater%22&srwhat=text&srnamespace=0|3000|3004

https://gunretort.xyz/api.php?action=query&list=search&srsearch="NewTag|Anteater"&srwhat=text&srnamespace=0|3000|3004 

finds both:

NewTag|Anteater

and

NewTag Anteater

We don't want to find the second one (where the pipe is interpreted as a space).

Possible solutions: concatenation character? extension:elasticSearch?
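Whatever the backend ends up doing with the pipe, the encoding itself can at least be made mechanical. A sketch (illustrative Python, reusing the wiki URL from the question) that percent-encodes the search term and uses the API's documented U+001F alternative separator, which must also be the first character of the value, for the multi-valued srnamespace parameter:

```python
import urllib.parse

API = "https://gunretort.xyz/api.php"  # the wiki from the question

def multivalue(*values):
    """Join values with U+001F; the API requires the value to start
    with U+001F to signal the alternative separator."""
    return "\x1f" + "\x1f".join(values)

def search_url(term, namespaces):
    params = {
        "action": "query", "list": "search", "srwhat": "text",
        "srsearch": term,  # any literal | stays inside the single value
        "srnamespace": multivalue(*namespaces),
        "format": "json",
    }
    # urlencode percent-encodes | as %7C and \x1f as %1F exactly once
    return API + "?" + urllib.parse.urlencode(params)
```

This guarantees the pipe reaches the API as part of a single srsearch value; whether the search engine's tokenizer then treats it as part of the phrase is the separate issue tracked in the Phabricator tickets above.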


How to Search for Transclusion Parameter with DPL3?

Published 2 Jun 2018 by johny why in Newest questions tagged mediawiki - Stack Overflow.

DPL3 can search for transclusions based on Template name. Eg:

{{Fruit|Banana}}

will be found by

...uses=Fruit...

https://help.gamepedia.com/DPL:Parameters:_Criteria_for_Page_Selection#uses

But how can I find only transclusions of Fruit which pass the "Banana" parameter? E.g., ignore this:

{{Fruit|Apple}}

How to Search for non-included strings with DPL3?

Published 2 Jun 2018 by johny why in Newest questions tagged mediawiki - Stack Overflow.

It seems DPL3 cannot search for arbitrary strings; content-based searches appear to apply only to text included in DPL output.

https://help.gamepedia.com/DPL:Parameters:_Criteria_for_Page_Selection#Select_articles_based_on_CONTENTS

Is there any way to search for text in page contents, regardless of whether it is included in the output or not?


How to Create Local WikiText Variable in MediaWiki Template?

Published 2 Jun 2018 by johny why in Newest questions tagged mediawiki - Stack Overflow.

How do I create a temporary local variable using wikitext, in a MediaWiki template?

Example:

MyVar = "Banana"

{{#if: {{{2}}}
|<small>MyVar</small>
|'''MyVar'''
}}

This extension ought to do the trick, but I'm wondering if there's a native method.


GLAM Blog Club – June 2018

Published 1 Jun 2018 by Nik McGrath in newCardigan.

In May our theme was Passion.

“Editing Wikipedia articles is a great outlet for my passion for advocating for social justice and inclusion by facilitating access to knowledge/research and promoting the power of play in libraries” – Clare gives a guide to how to get into Wikipedia editing in ’Shut Up and Wiki’ about your passions and hone your passion for facilitating access to diverse knowledge.

Andrew is passionate about the death positivity movement. “Death is a significant part of our profession from record keeping to historical sites and human remains, we need to (if we’re not already) become death positive. Death positivity enables us to reflect on how we handle death in our collections and exhibitions. Do we shy away from stories of death or do we embrace them?” – The GLAMR of Death.

The Specialist’s Passion rubbed off on Clare working with a botany and horticulture collection. “I wonder how many Librarians have assimilated their passions to the collections they are working with, or even the other way around. I mean, surely we all gravitate to the things we love/are passionate about?”

Anne shares her “tips for finding purpose through your passions” in Purpose is passion to a happier librarian.

Alissa is passionate about librarianship above everything else, but acknowledges, “…I owe it to those who’ve helped me get this far to not burn out in a fit of passion” – Sometimes you’ve got to take the hardest line.

Passion guides me, Nathan states that “…the main reason I believe I can work across GLAM is because of my passion and goals”. Nathan shares a vision statement he wrote three years ago which guides him in his work in the GLAM sector.

Lydia’s Some brief reflections on #ICHORA8 / #GLAMblogclub shares her passion for her profession and professional development through her thoughts about the 8th International Conference on the History of Records and Archives.

Hugh is passionate “…about cataloguing, and the consequences of devaluing and unseeing the labour of cataloguers and other metadata experts” in Breaking Things.

Library Snoozer in GLAM Blog Club – Passion shares the passion, perhaps obsession, of “a pre-teen boy and his gaming”.

Sarah’s Passion: a slow burn, or a fiery inferno?: “I see many displays of GLAMourous passion, in people doing amazing things for and with their communities and colleagues, as well as in the active blogging and Twittering communities. Yet a conversation on Twitter this week prompted me to consider the underlying of risk of burnout because the passion was overwhelming, becoming a fiery inferno that destroyed the house instead of being a slow burn keeping the hearth cosy. As an industry if we expect passion, we should only accept a sustainable amount of it.”

Ragamouf’s First, play this song loudly. It’s best in your office explores the author’s past and passion for Gary Wright’s ‘DreamWeaver’ and libraries and learning things.

Clare wrote a second blog about passion this month, Passion and creativity in librarianship and beyond: “It’s been a busy couple of weeks full of passion and creativity, and I’ve been starting to feel more optimistic about librarianship and life, so I thought I’d quickly sneak in another GLAM blog club given the topic this month is passion.”

Thank you for your blogs this month, and sharing your passions.

Many GLAM workers are creative at work, and outside of work. Our theme for June is Create. We look forward to reading your blogs!

Please don’t forget to use the tag GLAM Blog Club in your post, and #GLAMBlogClub for any social media posts linking to it. If you haven’t done so yet, remember to register your blog at Aus GLAM Blogs. Happy blogging!


Why We Chose Ceph to Build Block Storage

Published 31 May 2018 by Anthony D'Atri in The DigitalOcean Blog.

Why We Chose Ceph to Build Block Storage

In January 2013, DigitalOcean became one of the first cloud providers to offer SSD storage. For several years, a slice of the virtualization hypervisor's local drives provided this storage to Droplets. This approach worked great but had its limitations, such as:

For these and other reasons, we introduced Block Storage in July 2016. Since then, we’ve steadily increased capacity and have deployed into all service regions. In this post, we'll explore the underlying technology behind our Block Storage offering.

Creating Block Storage That Can Scale

In the past, portable, scalable block storage was usually provided by a traditional SAN (Storage Area Network). These tended to be expensive, difficult to manage, and hard to scale or upgrade, and the architecture was susceptible to considerable vendor lock-in.

At DigitalOcean, we love and support open-source software. So when the time came to architect our Block Storage service, we used these guiding criteria:

The best-of-breed solution for all of these criteria is the leader in open and widely-adopted distributed storage: Ceph.

Ceph in Production

In the 15 years since Ceph began, it has steadily grown in popularity, performance, stability, scalability, and features. As GNU Lesser General Public License (LGPL) open-source software, Ceph enjoys a rich community of users and developers, including multiple DigitalOcean engineers who've contributed upstream code to the core Ceph project.

The RBD (RADOS Block Device) service provided by Ceph slots right into the popular KVM QEMU virtualization technology we employ. Droplets enjoy flexible block storage that is presented just like a local drive.

Our Ceph-backed Block Storage service is also 100% SSD-based. Ceph is built for redundancy, and we carefully ensure that the loss of a single drive, server, or even an entire data center rack does not compromise data integrity or availability.

Ceph gracefully heals itself when individual components fail, ensuring continuity of service with uncompromised data protection. Additionally, we use sophisticated monitoring systems built around tools including Icinga, Prometheus, and our own open-source ceph_exporter. These help us respond immediately to any issues with our Ceph infrastructure to ensure continuous availability.

Our Block Storage deployment into each new Droplet region brings hundreds of enterprise-class SSDs managed by the Luminous release of Ceph. We keep three copies of your data to ensure the highest data durability and availability. These replicas are carefully distributed across separate servers and racks to eliminate any single point of failure.
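The three-copy, rack-aware layout described above corresponds to standard Ceph settings. A minimal sketch of the relevant knobs (illustrative values only, not DigitalOcean's actual configuration):

```ini
# ceph.conf fragment (illustrative, not DigitalOcean's config)
[global]
osd_pool_default_size = 3      # keep three replicas of every object
osd_pool_default_min_size = 2  # keep serving I/O with one replica down
```

Rack-level failure isolation comes from the CRUSH map rather than ceph.conf: a replicated rule whose placement step is `step chooseleaf firstn 0 type rack` forces each of the three replicas onto a different rack.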

Each Ceph cluster's performance and utilization is carefully monitored so that we can add additional resources as needed. Ceph's flexibility allows us to expand existing storage clusters or even add new ones to a region completely transparently. We are also able to upgrade Ceph and complete other types of fleet-wide maintenance in a rolling fashion, without downtime or other impacts to our valued customers.

It is important to note, however, that this replication is entirely behind the scenes. It prevents us from losing your Block Storage volume data, but it does not protect your Droplet itself, nor does it allow recovery from accidental deletion on your end. Thus, backups of critical data are still important. See these articles for help with Block Storage volume snapshots and data backups:

And if you haven’t already, create your own Block Storage volume on DigitalOcean.

Anthony D’Atri is a veteran sysadmin who's been working with Ceph for four years, starting with the Dumpling release. He is the co-author, along with Vaibhav Bhembre, of Learning Ceph, which outlines architecting, deploying, and managing Ceph at scale. He enjoys photography and a never ending quest for exotic fruit. He lives in Portland, Oregon with his wife and son.


PHPWeekly May 31st 2018

Published 31 May 2018 by in PHP Weekly Archive Feed.

PHPWeekly May 31st 2018
Curated news all about PHP.  Here's the latest edition
PHP Weekly 31st May 2018
Hello and welcome to the latest @phpweeklynews.
 
This week the PHP development team has released PHP 7.2.6 and 7.1.18, both available immediately.
 
Also we take a look at Part 2 of the Building a PHP Framework series, looking in greater detail at what web frameworks are and what they do.
 
We have an article that looks to answer the question of which is best: PHP or Node.js?
 
And finally, after a couple of months' break, the That Podcast team makes a welcome return, this week featuring Shawn McCool discussing his open source project Event Sourcery.
 
Have a great weekend,

Cheers
Ade and Katie

Please help us by clicking to our sponsor:

encrypt php scripts 
Protect your PHP Code
Why not try SourceGuardian 11. Click here to download a 14 Day Trial copy. Protect your code using Windows, Linux or Mac and run everywhere with our free Loaders.

Articles

Is Node.js killing PHP?
In a complex world of programming languages, it is often hard to understand what you really need. In fact, it can lead to holy wars over which one is best. In this article, we are going to address one such burning argument: PHP vs Node.js. Comparing them seems necessary, as they both act in the same field, mostly aimed at web development. Moreover, they are both open-source platforms and are often applied to the same web solutions. So, let us see the battle of Node.js vs PHP and define the winner!

What I Make of Adobe’s Magento Acquisition
When Adobe announced that they were going to buy Magento it was with a little bit of envy that I heard the news. During my time at Zend I did a lot of work with the folks at Adobe and I can truly say that it was one of the highlights of my time at Zend.

Invest in Promote Drupal. Get A Special Bonus
The Promote Drupal Initiative is your opportunity to make Drupal - and your business - known and loved by new decision makers. Donate to the Promote Drupal Fund today. Help us help you grow your business. Together, let's show the world just how amazing Drupal is for organisations.

The 52 Best Tools for Freelancers to Scale a Business
People choose to freelance for a number of reasons—but most often, it is because they desire freedom. But, freelancing is not all rainbows and smiles. Check out these tools to make your job easier.

Tutorials and Talks

How I Built The LaravelQuiz Chatbot With BotMan and Laravel
Ever wondered how well you know your favourite PHP framework Laravel? Give the LaravelQuiz Chatbot a try, and you will find out. This article is a step by step guide on how I created this chatbot with BotMan and Laravel.
 
How to Test Private Services in Symfony
Two versions of Symfony are affected by this dissonance between services and tests. Do you use Symfony 3.4 or 4.0? Do you want to test your services, but struggle to get them in a clean way? Today we look at possible solutions.
 
A Package That Makes Event Sourcing in Laravel a Breeze
In most applications you store the state of the application in the database. If something needs to be changed you simply update values in a table. When using event sourcing you'll take a different approach. All changes to application state are stored as a series of events. The key benefit here is that you now have a history of your database.
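The idea in that blurb can be sketched in a few lines. This is illustrative only: the package itself is PHP/Laravel, and these function and event names are made up for the example. The point is that state is never updated in place; every change is appended as an event, and the current state is derived by replaying the history.

```python
# Minimal event-sourcing sketch (illustrative; not the Laravel
# package's API). State changes are appended, never overwritten.
event_store = []

def record(event_type, payload):
    """Append an immutable event to the store."""
    event_store.append({"type": event_type, "payload": payload})

def current_balance():
    """Rebuild account state by replaying the full event history."""
    balance = 0
    for event in event_store:
        if event["type"] == "deposited":
            balance += event["payload"]
        elif event["type"] == "withdrawn":
            balance -= event["payload"]
    return balance

record("deposited", 100)
record("withdrawn", 30)
record("deposited", 5)
print(current_balance())  # -> 75
print(len(event_store))   # full history is retained -> 3
```

The "history of your database" benefit mentioned above falls out for free: the event store is that history.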
 
Adding an Auto-Generated Sitemap to Your Jigsaw-based Static Site
I love Tighten's static site generator, Jigsaw. I've tried a few other static site generators, and (of course, I'm biased) I think Jigsaw has the best combination of power and simplicity. Plus, it feels like I'm writing Laravel code - because, essentially, I am.
 
Simple Horizontal Scrolling Menu in Just CSS
I recently visited a site with a horizontally scrolling sub-menu (pictured below) which I really liked. Because of the stigma of horizontal scrolling on desktop fostered from the non-responsive days, I often immediately dismiss it as bad practice but actually I found this pattern to be very usable on my phone.
 
Scheduling MySQL Backups with Laravel
You can export your whole database by running one line in your command line. It's accessible and useful. But it's a bit wiser to automate the entire process. Let's see how!
 
When and Where to Determine the ID of an Entity
This is a question that always pops up during my workshops: when and where to determine the ID of an entity? There are different answers, no best answer. Well, there are two best answers, but they apply to two different situations.
 
An Introduction to Mongo DB
MongoDB is an open-source, document-oriented, NoSQL database program. If you’ve been involved with the traditional, relational databases for long, the idea of a document-oriented, NoSQL database might indeed sound peculiar. “How can a database not have tables?”, you might wonder. This tutorial introduces you to some of the basic concepts of MongoDB and should help you get started even if you have very limited experience with a database management system.
 
Understanding Design Patterns - Iterator
Provides a way to access the elements of an aggregate object sequentially without exposing its underlying representation.
 
Working with Mutable and Immutable DateTime in PHP
Mutable dates can be the source of confusion and unexpected bugs in your code. My goal isn’t to tell you that DateTime is evil because it’s mutable, but to consider the tradeoffs and benefits of using mutable versus immutable DateTime objects. Either approach warrants a good test suite and an awareness of how modifier methods affect your date objects.

News and Announcements

PHP 7.2.6 Released
The PHP development team announces the immediate availability of PHP 7.2.6. This is primarily a bugfix release which includes a memory corruption fix for EXIF. PHP 7.2 users are encouraged to upgrade to this version.

PHP 7.1.18 Released
The PHP development team announces the immediate availability of PHP 7.1.18. All PHP 7.1 users are encouraged to upgrade to this version.

Symfony 4.1.0 Released
Symfony 4.1.0 has just been released, with a list of the most important changes.

Mid-Atlantic Developer Conference - July 13-14th 2018, Baltimore
Mid-Atlantic Dev Con is a polyglot event, designed to bring together programmers from the region, regardless of their choice of platform, for two full days of learning from each other and building a stronger regional community. Tickets are on sale now.

Laracon EU - 29-31st August 2018, Amsterdam
Laracon EU is a unique international Laravel event with over 750 attendees. The conference has multiple tracks and is focusing on in-depth technical talks. Come learn about the state of the industry while networking with like-minded and diversely experienced developers. Tickets are on sale now.

ZendCon - 15-17th October 2018, Las Vegas
ZendCon & OpenEnterprise is the premier technology conference designed to teach and share practical experiences from the front lines of enterprise PHP and open source environments. Focused on solving real-world, enterprise-class problems, technical business leaders, strategists, and developers will assemble to discuss case studies and best practices around the application of PHP and open source to transform business. The Call for Papers ends TODAY, and Blind Bird tickets are on sale now.

Podcasts

This week Cal Evans interviews Karen Baker, founder of ShipperHQ and WebShopApps.
 
That Podcast Episode 50: The One Where We Talk to Shawn about Event Sourcery, CQRS, Event Sourcing and GDPR
In this episode, Dave and Beau talk to Shawn McCool about his experiences with CQRS and Event Sourcing, the GDPR, and his recently revealed open source project event sourcery. Event sourcery is a PHP CQRS/ES library with a core principle of keeping it simple, while providing some more advanced technical capabilities, like keeping personal data out of the immutable event streams.
 
Full Stack Radio Podcast Episode 89: Sam Selikoff - Choosing Ember.js in 2018
In this episode, Adam talks to Sam Selikoff about how Ember fits into the JS framework landscape in 2018, and why it might be the right choice for your next project.
 
MageTalk Magento Podcast #170 - “Level Up” Live at Imagine 2018
Phillip and Kalen recap the first day and a half of Imagine 2018, and Kalen remarks how the entire Magento community is being asked to "level up" with a call to higher standards and higher goals from Magento CEO, Mark Lavelle. Listen now!
 
PHP Roundtable Podcast Episode 71: Extra! Extra! PHP 7.2 Released!
The next major version of PHP is here! PHP 7.2 comes with a nice set of upgrades, performance enhancements, and a brand new crypto library right out of the box. We discuss some of the features and breaking changes that we should be aware of before upgrading to PHP 7.2.
 
Laravel News Podcast LN63: The latest Laravel Releases, Editors, Package Development, and Community Packages
Jake and Michael discuss all the latest Laravel releases, tutorials, and happenings in the community.
 
The Laracasts Snippets Episode 84: Basic Financial Literacy
In the United States (and surely many other countries), financial literacy is not taught in schools. You might think that basic investing and a review of compound interest would be profoundly important learning material. But according to the school board, you'd be wrong. Perhaps it's only natural then that those living in the US are deeper in debt than ever in our history.
 
Topics include how a “location API” allows cops to figure out where we all are in real time, and introducing Visual Studio Live Share.
 
Our hosts, Eric van Johnson and John Congdon review Treasure, Old & New which is the May 2018 issue of php[architect] magazine. Share your thoughts on the topics covered and leave a comment below.
 
Post Status Draft Podcast - The History of the Web, and WordPress’s 15th Birthday
In this episode, Brian is joined by Jay Hoffmann — the owner and curator of The History of the Web, a timeline and history of the web — and they discuss the project, as well as WordPress’s 15 year arc of history.

Reading and Viewing

What Are The WordPress PHP Coding Standards?
In this video from my course, Learn PHP for WordPress, you'll learn all about the coding standards and why they're important.
 
Building a PHP Framework: Part 2 – What is a Web Framework?
Part 1 of this series detailed why I have this crazy idea to build a PHP framework. In this post I’ll be discussing what web frameworks are, what they do, and give some initial ideas for Analyze.
 
Cloudways Interview - Success Story of Adam Stone, CEO of Freelancing Startup Speedlancer
Freelancing startup Speedlancer hires qualified freelance designers, writers, and data entry professionals that businesses can employ on a freelance basis. The startup helps businesses save time previously wasted on bidding for and hiring freelancers, so that jobs get done as quickly as possible. We spoke with Adam about his journey of becoming a serial entrepreneur and the breakthrough success of Speedlancer in the freelancing industry.
 
This article is Part 2 of my review of the book "Fifty quick ideas to improve your tests". I'll continue to share some of my personal highlights with you.

Jobs





Do you have a position that you would like to fill? PHP Weekly is ideal for targeting developers and the cost is only $50/week for an advert.  Please let me know if you are interested by emailing me at katie@phpweekly.com

Interesting Projects, Tools and Libraries

errorheromodule
A hero for your ZF2, ZF3, and ZF Expressive applications that logs (to DB and mail) and handles PHP errors & exceptions during the MVC process, between request and response.
 
question2answer
Question2Answer is a free and open source platform for Q&A sites, running on PHP/MySQL.
 
url-shortener
A small set of PHP scripts that will help you shorten your URLs.
 
symfonyinstaller
This is the official installer to start new projects based on the Symfony full-stack framework. The installer is only compatible with Symfony 2 and 3.
 
nukeviet
NukeViet CMS is a multi-purpose content management system, and the first open-source CMS in Vietnam.
 
migration
A simple framework-independent PHP library for database version control. Supports SQLite, MySQL, SQL Server and PostgreSQL.
 
php-binance-api
PHP Binance API is an asynchronous PHP library for the Binance API designed to be easy to use.
 
typed
Improvements to PHP's type system in userland: generics, typed lists, tuples and structs.
 
ci-phpunit-test
An easier way to use PHPUnit with CodeIgniter 3.x.
 
ide-stubs
This repo provides the most complete Phalcon Framework stubs, which enable auto-completion in modern IDEs.
 
skosmos
Skosmos is a web-based tool providing services for accessing controlled vocabularies, which are used by indexers describing documents and searchers looking for suitable keywords.
 
laravel-cookie-consent
Make your Laravel app comply with the crazy EU cookie law.
 
shopware
Shopware 5 is the next generation of open source e-commerce software made in Germany.
 
phpdish
PHPDish is a powerful forum system written in PHP. It is based on the Symfony PHP Framework.

Please help us by clicking to our sponsor:

encrypt php scripts 
Protect your PHP Code
Why not try SourceGuardian 11. Click here to download a 14 Day Trial copy. Protect your code using Windows, Linux or Mac and run everywhere with our free Loaders.
 

So, how did you like this issue?

Like us on Facebook | Follow us on Twitter
We are still trying to grow our list. If you find PHP Weekly useful please tweet about us! Thanks.
Also, if you have a site or blog related to PHP then please link through to our site.

unsubscribe from this list | update subscription preferences 
 
Copyright © 2018 PHP Weekly, All rights reserved.
Email Marketing Powered by MailChimp

Verifiable Claims and Distributed Identifiers at W3C

Published 30 May 2018 by Liam Quin in W3C Blog.

It’s always exciting to write about great work going on at W3C with potential to have a huge impact on humanity. One of the use cases in the Verifiable Claims work is to give stateless refugees a way to identify themselves safely. Other use cases in education, in government, in banking,  have potential to change the way business is done on the Web. So what’s this stuff all about?

Really, the Verifiable Claims Working Group is developing a framework for signing and verifying credentials, such as, this person has a valid driving license and I am this person; or, I’m over eighteen years of age but I don’t want to tell you my exact age; or, I’m a legal resident of such-and-such a region and have a right to enroll for this university course.

The model is fairly simple: you use an independent third party to hold an “identity wallet” that contains your credentials. I think of this as a can-do box: you put things in it that you can do and, when you tell them to, the third party releases a copy of a credential to another organization. So you go to rent a car and you instruct the can-do box to show the car rental company your driving licence.

Now, how does this preserve any privacy? First, if you want, you don't actually have to use a third party to hold your can-do box: you can keep it on your own computer or in the cloud. You can also have as many different can-do boxes in as many places as you like. In addition, privacy legislation such as Europe's GDPR applies to all personal data, and the penalties for violating the law are heavy. In the future, it's possible we'll also see encrypted credentials.

If a third party is holding your credentials, how do you tell it which one to show the car rental firm or the border immigration officer? This is where distributed identifiers have a role to play: you tell the can-do box to release the credential with a specific identifier. The organization keeping that box doesn't need to know who you are, nor anything about the recipient, nor why you're releasing a copy of the credential. Just: send credential 137 to so-and-so from box 9015. You are in control of what gets shared and with whom.

The distributed identifiers draft (and the first draft of verifiable claims, or verifiable credentials) came out of the W3C Credentials Community Group, which continues active work today alongside the W3C Verifiable Claims Working Group.

What about trust? Well, the credentials are digitally signed by the issuer, so for example the Belgian government can confirm that a particular credential is one that they issued, and maybe supply a certificate to go with it, something that can also be shown to a human.
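The issue-then-verify trust model can be sketched as follows. Real verifiable credentials use asymmetric signatures (for example Ed25519) with JSON-LD proofs; to keep this sketch standard-library-only, an HMAC with a shared secret stands in for the issuer's signature, and the key and field names are invented for illustration.

```python
# Toy sketch of the trust model: an issuer signs a credential and a
# verifier checks the signature. An HMAC stands in for a real
# asymmetric digital signature purely to stay stdlib-only.
import hashlib
import hmac
import json

ISSUER_KEY = b"issuer-signing-key"  # hypothetical shared secret

def issue(claim: dict) -> dict:
    """Serialize the claim deterministically and attach a signature."""
    body = json.dumps(claim, sort_keys=True).encode()
    sig = hmac.new(ISSUER_KEY, body, hashlib.sha256).hexdigest()
    return {"claim": claim, "signature": sig}

def verify(credential: dict) -> bool:
    """Recompute the signature and compare in constant time."""
    body = json.dumps(credential["claim"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, credential["signature"])

cred = issue({"id": "credential-137", "over_18": True})
print(verify(cred))  # -> True

cred["claim"]["over_18"] = False  # tampering breaks the signature
print(verify(cred))  # -> False
```

This is the property the paragraph above relies on: anyone can check that a credential really came from the issuer and has not been altered.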

And because the framework is built on top of the blockchain distributed resolution model (entirely separate from Bitcoin, of course), you can revoke credentials at any time without needing a complex public key infrastructure beyond what the underlying blockchain protocols already provide.

Verifiable credentials are generally exchanged today using a JSON-LD syntax, although there may also be an XML syntax in the future. There are implementations building on top of platforms such as Hyperledger, so although the work is not yet a W3C Candidate Recommendation, it's already solving problems. Government departments are considering these technologies for driving licences and for digital IDs for people around the world. The third-party verifiable credential model is also being used for delivery of educational content to just the right people, and more applications are emerging.

Why not join the work and share some of our excitement?


Mediawiki AuthManager and SessionManager SSO

Published 30 May 2018 by Andy Johnson in Newest questions tagged mediawiki - Stack Overflow.

I am currently using 1.24.x with the LoginForm class and FauxRequest to log in remote users (creating them locally if they don't exist), but this feature is being removed in 1.27.x, so I am forced to rewrite against the new standard using AuthManager and SessionManager. I will also be upgrading to 1.31 as soon as its LTS version comes out. Reading about AuthManager and SessionManager, I just can't understand how to authenticate external users. I also looked at the PluggableSSO extension, which uses PluggableAuth, but I can't understand it either. Can someone please point me to a straightforward example of how to authenticate a user if I have a user ID and user name? And if that user doesn't exist, how can I create one and authenticate them locally?

Thanks


Create custom endpoint with custom route tutorial

Published 30 May 2018 by user1292656 in Newest questions tagged mediawiki - Stack Overflow.

Is there any way to create a custom endpoint in MediaWiki, for example:

http://my.wikiexample.com/custom/deleteAccount

I did some searching but found info only on the API sandbox and the REST API, neither of which explains how to create custom endpoints.


How to access a table in mediawiki

Published 29 May 2018 by Matthew Oujiri in Newest questions tagged mediawiki - Stack Overflow.

Right now I am using the MediaWiki API and the requests module to try to pull certain information from a sort of table on a Wikipedia page. As an example, take the song Zombie, where there is a 'table' on the right listing the album, the author, the release date and so forth. The only issue I'm running into is that I don't know how to query this data. I'm using this link as the endpoint: https://en.wikipedia.org/w/api.php?format=json&formatversion=2&action=query&titles=Zombie_(song)&prop=extracts to search for what I need, but it only brings up the text on the page. I've tried the sandbox, and I've had issues trying to find what would give me the information I need. I appreciate any advice and input, thanks.
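One possible approach to the question above (a sketch only: the helper names are invented, and the field-extraction regex is deliberately naive) is to request the raw wikitext with `prop=revisions&rvprop=content` instead of `prop=extracts`, then pull `| key = value` lines out of the infobox markup:

```python
# Sketch: build a revisions-content API URL, and naively parse
# "| key = value" infobox fields out of raw wikitext.
import re
from urllib.parse import urlencode

def revisions_query_url(title):
    """Hypothetical helper: an API URL that returns raw wikitext."""
    params = {
        "format": "json", "formatversion": "2", "action": "query",
        "titles": title, "prop": "revisions",
        "rvprop": "content", "rvslots": "main",
    }
    return "https://en.wikipedia.org/w/api.php?" + urlencode(params)

def infobox_fields(wikitext):
    """Naively extract top-level '| key = value' pairs."""
    fields = {}
    for m in re.finditer(r"^\|\s*(\w+)\s*=\s*(.+)$", wikitext, re.M):
        fields[m.group(1)] = m.group(2).strip()
    return fields

sample = """{{Infobox song
| name     = Zombie
| album    = No Need to Argue
| released = September 1994
}}"""
print(infobox_fields(sample)["album"])  # -> No Need to Argue
```

Real infobox wikitext is messier than this sample (nested templates, references), so a dedicated wikitext parser is usually the sturdier route, but the API switch from `extracts` to `revisions` is the key step.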


Episode 9: Kunal Mehta

Published 29 May 2018 by Yaron Koren in Between the Brackets: a MediaWiki Podcast.

Kunal Mehta (also known as "legoktm") is a developer at the Wikimedia Foundation in the MediaWiki Platform team. He has been involved in MediaWiki development since 2010.

Links for some of the topics discussed:


Encoding issue when navigate anywhere

Published 29 May 2018 by user1292656 in Newest questions tagged mediawiki - Stack Overflow.

Did anyone meet the following issue on MediaWiki? The request header seems to have the correct UTF-8 encoding. See the image.



"Who will have our backs now?"

Published 29 May 2018 by in New Humanist Articles and Posts.

In Pakistan, accusations of blasphemy can lead to death at the hands of the state – or the mob. A few brave lawyers are fighting back.

More time for living, less for living longer

Published 29 May 2018 by in New Humanist Articles and Posts.

Barbara Ehrenreich asks if our obsession with “wellness” and keeping fit is really all that healthy.

Introducing CoverMe: find the most called MediaWiki code lacking test coverage

Published 29 May 2018 by legoktm in The Lego Mirror.

CoverMe, hosted on Wikimedia Toolforge

Test coverage is a useful metric, but it can be difficult to figure out exactly where to start. That's where CoverMe is useful - it sorts functions by how often they're called on Wikimedia production servers, and then displays their coverage status.

CoverMe

Try it out! You can filter by Git repository and entry point (index.php, load.php, etc.). So if you look at the api.php entry point, you'll see mostly API related code. If I look at the Linter extension, I can see that the RecordLintJob::run is well covered, while ApiRecordLint::run is not covered at all. If some extensions simply aren't called that frequently, there might not be any function call data at all.

The function call data comes from the daily Xenon logs that are used for profiling FlameGraphs, and the CI test coverage data. CoverMe fetches updated data on the hour if it's available.
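The core idea described above can be sketched in a few lines (this is my assumed reading of the description, with invented names and numbers, not CoverMe's actual code): rank functions by production call count, then filter to the uncovered ones so the hottest untested code floats to the top.

```python
# Toy sketch of CoverMe's ranking idea (names and counts invented).
call_counts = {               # e.g. derived from Xenon profiling logs
    "RecordLintJob::run": 90210,
    "ApiRecordLint::run": 4400,
    "rarelyCalledHelper": 3,
}
covered = {"RecordLintJob::run"}   # e.g. derived from CI coverage data

def hot_uncovered(counts, covered_set):
    """Most-called functions first, keeping only the uncovered ones."""
    ranked = sorted(counts.items(), key=lambda kv: kv[1], reverse=True)
    return [(fn, n) for fn, n in ranked if fn not in covered_set]

for fn, n in hot_uncovered(call_counts, covered):
    print(fn, n)
# ApiRecordLint::run 4400
# rarelyCalledHelper 3
```

This matches the Linter example above: the well-covered `RecordLintJob::run` drops out of the list, while the uncovered `ApiRecordLint::run` is the top candidate for new tests.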

The source code is published on Phabricator and licensed under the AGPL v3, or any later version.


cardiParty 2018-06 with Sarah Murphy

Published 28 May 2018 by Andrew Kelly in newCardigan.

Saturday 9 June, 1pm at East Perth Cemeteries.

Find out more...


Updating to a much higher mediawiki script?

Published 25 May 2018 by Erik L in Newest questions tagged mediawiki - Stack Overflow.

I currently have 1.23.1 installed and would like to update to 1.30, and I have a bunch of extensions installed. What would be the best way to update the wiki? Should I update version by version or jump straight to the final version that I want?


Extract titles of pages in category using the WikipediR package in R

Published 25 May 2018 by Sophie in Newest questions tagged mediawiki - Stack Overflow.

Using the "WikipediR" package in R, I would like to extract all category members of the category "Person_der_Reformation" on Wikipedia:

library(WikipediR)

# Retrieve all pages in the "Person_der_Reformation" category on de.wiki
persons <- pages_in_category("de", "wikipedia", categories = "Person_der_Reformation", limit = 500)
persons

My result seems to be a threefold nested list (persons > query > categorymembers). But how do I process this further? In the end, I want to get a list of all personal names in order to build a list of all the Wikipedia articles I want to scrape for building a corpus for text mining in R. Does anybody have a clue on this?

My alternative idea was to read the XML document resulting from the API call https://de.wikipedia.org/w/api.php?action=query&list=categorymembers&cmtitle=Category:Person_der_Reformation&cmlimit=500&format=xml

but there I'm struggling with the XPath to address the title attributes. This is what the result of the API call looks like:

<api batchcomplete="">
<query>
<categorymembers>
<cm pageid="2720179" ns="0" title="Jacobus Acontius"/>
<cm pageid="1347785" ns="0" title="Sebastian Aitinger"/>
<cm pageid="7892887" ns="0" title="Martial Alba"/>
<cm pageid="2960360" ns="0" title="Albrecht (Nassau-Weilburg)"/>
.....
</categorymembers>
</query>
</api> 
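For the XPath part of the question, the names are attributes of the `<cm>` elements, so the path is `.//cm` and the value comes from each element's `title` attribute. A short sketch (the original question uses R, but the path logic is the same; shown here with Python's standard library on the sample response above):

```python
# Extract the title attributes from the categorymembers XML response.
import xml.etree.ElementTree as ET

xml_doc = """<api batchcomplete="">
<query>
<categorymembers>
<cm pageid="2720179" ns="0" title="Jacobus Acontius"/>
<cm pageid="1347785" ns="0" title="Sebastian Aitinger"/>
<cm pageid="7892887" ns="0" title="Martial Alba"/>
</categorymembers>
</query>
</api>"""

root = ET.fromstring(xml_doc)
# .//cm finds every <cm> element; .get() reads its attribute.
titles = [cm.get("title") for cm in root.findall(".//cm")]
print(titles)
# -> ['Jacobus Acontius', 'Sebastian Aitinger', 'Martial Alba']
```

In R, the equivalent XPath expression `//cm/@title` should address the same attribute nodes.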

GDPR is Here, and We've Got You Covered

Published 25 May 2018 by DigitalOcean in The DigitalOcean Blog.

GDPR is Here, and We've Got You Covered

Today, the new European General Data Protection Regulation (GDPR) goes into effect. (You might have received a few emails about it.) There are a lot of moving parts, but it’s an important step in protecting the fundamental right of privacy for European citizens, and it also raises the bar for data protection, security, and compliance in the industry. This post is here to guide you to our GDPR-related resources.

We’ve created a new GDPR section on our website to go over what GDPR means for you and the steps we’ve taken to ensure the protection of your privacy. In this section, you’ll find:

In addition, we updated our Privacy Policy and Terms of Service Agreement to comply with the new requirements of GDPR. If you’re interested in seeing what changed in the Privacy Policy and TOS, check out our GitHub repo where you can compare versions.

We take this new regulation seriously, and we want to get you back to doing what you love—building great software.


Mediawiki 1.27.4 jquery not loaded

Published 24 May 2018 by Andy Johnson in Newest questions tagged mediawiki - Stack Overflow.

I am new to MediaWiki and the ResourceLoader stuff. I recently downloaded the MediaWiki 1.27.4 LTS version, and when I installed it, I found that even though jQuery is supposedly loaded by default, it is nowhere to be found (I am looking in the Sources tab of Chrome developer tools). In one of my extensions, which uses the BeforePageDisplay hook, I wanted to use jquery.cookie, so I declared the following ResourceLoader module:

$wgResourceModules['ext.myFirstExtension'] = array(           
        'dependencies' => array( 'jquery.cookie'),            
        'localBasePath' => dirname( __FILE__ ),            
        'remoteExtPath' => 'myFirstExtension',
        'position' => 'top'
);

And in my extension file, I am autoloading one of the classes, in which I am simply executing the following code, and it throws the typical "$ is undefined" error since jQuery is not loaded.

$(document).ready(function(){
alert("here");
});

And yes, I am using the Vector skin without any modifications. In addition, I am not using any other extensions except VisualEditor, which works fine.

I also tried mw.loader.load('jquery'), and it also complains that mw is not recognized.

I also added $wgResourceLoaderDebug = true; in my LocalSettings.php so that ResourceLoader doesn't bundle up my scripts and CSS.

I suspect that MediaWiki internally can't function without jQuery, but how can I get jQuery to load in my extension correctly so that I can use jquery.cookie?

Thanks


GDPR is here!

Published 24 May 2018 by Bron Gondwana in FastMail blog.

The European Union and the United Kingdom have been leaders in writing regulations to protect something we've long known you value -- your personal information and privacy. We talked about the basics of GDPR protection last month; now it's time to talk about what's changing.

For us, it's been an opportunity to make sure that our practices are in line with our values.

For FastMail, not much is changing. We have high standards for ourselves, and you don't have to change much if you aren't monetizing customers' personal data! Where we've spent the bulk of our time (besides converting our policies from code into words) is thinking about areas where being helpful comes into tension with privacy.

Being helpful vs. protecting your privacy

We pride ourselves on solving unusual problems like buggy mail client behavior, and helping customers out of tough situations (even when that tough situation is something like "my aged parent forgot to pay for their account for two years.") It feels great to go above and beyond for customers! But this process made us think about what kind of personal data might be collected incidentally in the logs we use for debugging, or how long a reasonable person might expect that their information is retained if they choose not to pay for an account.

Reducing our data retention periods, especially in the case where the retained data was likely to contain personal customer information, was one of our biggest changes. We've tried to strike the right balance between making sure you still get the support you expect from us, and protecting your personal information.

You've got rights - know how to use them

We know our new privacy policy is longer. We went with one that sacrificed brevity for coverage, but we hope it has retained clarity and comprehension.

Due to our commitment to open standards, it's always been possible to get your personal data from us in a downloadable, machine-readable format. The privacy policy now includes much more specific language detailing the laws under which those rights are granted - but at FastMail, everyone has them, not just European residents.

What's a DPA, and do I need one?

One of GDPR's other major goals is to try to keep companies from passing the buck in the case of a breach of personal information. As such, corporations that process data on behalf of other people need a contract with all the vendors they use who might hold that information. That contract is a Data Protection Addendum. If you're an individual, you get your services directly from us, and you don't need a DPA.

If you're a corporation, and you do need a DPA, it depends which product you're using, for:

What's next?

This is not our last revision to our Terms of Service and Privacy Policy. Protecting your data is not something we need a law to push us to do! It did push us to formally name Privacy Officers (who you can contact at privacy@fastmailteam.com). They are staffers who are receiving additional training on security and privacy considerations, and are explicitly empowered to question decisions we're making in all our products to make sure we're always making good choices around your privacy.

Our revised documents and new related resources:

If you have further questions about GDPR, your data, or your privacy rights, feel free to reach out to our support team for assistance. Thank you for using FastMail!


PHPWeekly May 25th 2018

Published 24 May 2018 by in PHP Weekly Archive Feed.

PHPWeekly May 25th 2018
Curated news all about PHP.  Here's the latest edition
PHP Weekly 24th May 2018
Hello to the PHP community, and welcome to PHPweekly.com.
 
After a hiatus of several weeks, the PHP Roundtable podcast makes a return in this week's edition, discussing all things WordPress.

Also this week we take a look at changes made to Homestead to simplify setting it up to serve Apigility, Expressive and Zend Framework projects.

If you want to add user email confirmation to your Laravel projects, we've got details on a new package that does just that.

Plus this week Joomla 3.8.8 has been released, addressing 9 security vulnerabilities and including over 50 bug fixes.

And finally, Symfony Live returns to London in September over two days. The Call for Papers is open now.

Have a great weekend,

Cheers
Ade and Katie

Please help us by clicking to our sponsor:

encrypt php scripts 
Protect your PHP Code
Why not try SourceGuardian 11. Click here to download a 14 Day Trial copy. Protect your code using Windows, Linux or Mac and run everywhere with our free Loaders.

Articles

20 Best WPBakery Page Builder Addons & Extensions of 2018
With over a million users worldwide, the popular drag-and-drop page builder, WPBakery Page Builder (formerly Visual Composer), has inspired a legion of developers to create loads of cool and useful addons and extensions that increase its functions and provide virtually unlimited possibilities when it comes to creating the WordPress site of your dreams. Here’s a list of the 20 best of these addons and extensions for 2018 to be found at CodeCanyon.

Investing In the Promote Drupal Fund
The Promote Drupal Initiative is your opportunity to make Drupal - and your business - known and loved by new decision makers. Led by the Drupal Association, we will work with the Drupal business community to hone Drupal’s messaging and create the promotional materials we can all use to amplify the power of Drupal in the marketplace.
 
The Do’s and Don’ts for Hosting WordPress Membership Sites
When it comes to WordPress sites, not all of them can be treated the same in terms of what works best for performance. A simple five-page WordPress site behaves completely differently than, say, a large WooCommerce site (which can be very demanding). WordPress membership and community sites are another type that falls into what we sometimes call the “tricky” category. Today we’ll explore some of the do’s and don’ts for WordPress membership sites and how best to optimise them to ensure performance, scalability, and longevity.
 
PHP Allows For The Design of X
Starting complicated Twitter conversations should be avoided, I know this, and yet I blurted one out recently ... This was met with a flurry of responses and I couldn't reasonably reply in tweet form. I'm going to respond to some of those tweets (indirectly) and further explain my original tweet.

Tutorials and Talks

Adding WordPress Admin Notices With Your WordPress Plugin
When you develop a plugin, sometimes you need a communication channel with the users who installed it; one easy way to achieve this is to show a WordPress admin notification at the top of admin pages.
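As a rough illustration, such notices hang off WordPress's standard admin_notices action hook; the message text and CSS classes below are examples, not taken from the article:

```php
<?php
// Minimal sketch of a WordPress admin notice, hooked on the standard
// 'admin_notices' action. The message and the 'notice notice-info
// is-dismissible' classes are illustrative placeholders.
add_action( 'admin_notices', function () {
    echo '<div class="notice notice-info is-dismissible">';
    echo '<p>My Plugin: please finish configuring your settings.</p>';
    echo '</div>';
} );
```

In a real plugin this would live in the main plugin file, and the callback would usually check user capabilities before printing.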

How to Load --config With Services in Symfony Console
In the first post about PHP CLI apps I wrote about poor DI support in PHP CLI projects. Today we look at the first barrier that leads most people to prefer static over DI: how to load a config with services.

Running SQLite in PHP with Docker
SQLite is a great database for getting started on small projects. Unlike traditional SQL databases (like MySQL or Postgres), SQLite stores all your records in a single flat file that you can easily edit, transfer, or even check into version control (if your project warrants it). Another great feature of SQLite is that it’s built into the default PHP images on Docker Hub, so you don’t even have to start up another Docker container, and running a PHP application with a SQLite database is essentially a one-liner. Let’s take a look at how you can incorporate SQLite into your Dockerised PHP apps.
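As a sketch of that built-in support, PDO's SQLite driver works out of the box in the official PHP Docker images; the database path and schema here are made up for illustration:

```php
<?php
// Minimal sketch: SQLite through PDO, which ships with the official PHP
// Docker images (no extra extension install needed). Path and table are
// examples only.
$db = new PDO('sqlite:/tmp/app.db');
$db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$db->exec('CREATE TABLE IF NOT EXISTS notes (id INTEGER PRIMARY KEY, body TEXT)');
$db->exec("INSERT INTO notes (body) VALUES ('hello')");
echo $db->query('SELECT COUNT(*) FROM notes')->fetchColumn();
```

Because the whole database is one file, mounting a Docker volume over its directory is all that is needed to persist it between container runs.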

Weird Operators in PHP
If you read the PHP documentation, you will learn about a ton of operators. If you haven’t learnt about PHP operators, go do that first, we’ll wait for you.
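Two lesser-known operators likely in this territory (the article's exact selection may differ): the spaceship operator <=> and the null coalescing operator ??, both introduced in PHP 7:

```php
<?php
// The spaceship operator returns -1, 0 or 1 from a three-way comparison.
var_dump(1 <=> 2); // left is smaller
var_dump(2 <=> 2); // equal
var_dump(3 <=> 2); // left is larger

// The null coalescing operator falls back when a value is null or unset,
// without raising an "undefined index" notice.
$config = [];
echo $config['host'] ?? 'localhost';
```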

Developing Laravel Packages with Local Composer Dependencies
Developing Composer packages locally through a local file symlink speeds up development immensely when you want to create Laravel packages and try them out on a real application. I was reading about a fancy bash alias by Caleb Porzio, which is a bash alias inspired by npm link.
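Underneath such aliases sits Composer's path repository type, which symlinks a local package into vendor/; a sketch of the relevant composer.json fragment, with a placeholder package name and path:

```json
{
    "repositories": [
        { "type": "path", "url": "../my-local-package" }
    ],
    "require": {
        "acme/my-local-package": "*"
    }
}
```

With the symlink in place, edits to the local package show up in the consuming application immediately, with no composer update round-trip.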

Zend Framework/Homestead Integration
Last year, we wrote about using Laravel Homestead with ZF projects. Today, we contributed some changes to Homestead to simplify setting it up to serve Apigility, Expressive, and Zend Framework projects.

Moving A WordPress Root Install To A Subdirectory Install And Vice Versa
Tell me if this sounds familiar: You pick up a new client who wants you to develop a new theme for their site so you set up a development site and get to work. A few days go by and you’ve laid the groundwork of the new theme and you think it’s about time to pull in their current site’s content so that you can get into the specifics. You set up WP Migrate DB Pro on both sites, pull the database and then realise that your development site was a subdirectory install while the client’s live site was a standard root-directory install and your dev site is totally messed up.

When Empty Is Not Empty
Recently when I was working on a project I got some strange results when using the empty function. Here's what I was debugging. I've simplified the code a bit to share with you.
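A common source of such surprises (possibly the one in the article, though that is a guess) is that empty() mirrors PHP's truthiness rules rather than testing whether a variable holds "no data":

```php
<?php
// empty($x) is true whenever $x would cast to false.
var_dump(empty(''));    // true
var_dump(empty('0'));   // true  -- the classic surprise: a non-blank string
var_dump(empty('0.0')); // false -- only the exact string "0" is falsy
var_dump(empty([]));    // true
var_dump(empty(0));     // true
```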

New in Symfony 4.1: Hidden Services
In Symfony 3.4 we made all Symfony services private by default. This is generally better and makes applications more robust (as explained in the previous post) but it also has some drawbacks.

Add User Email Confirmation to Your Laravel Projects
If you want to add an email verification step to user registration in your Laravel Projects, Marcel Pociot has a new package aptly named laravel-confirm-email. New users are required to confirm their registration through an email to proceed.

Five Useful Laravel Blade Directives
We’re going to look at five Laravel Blade directives you can use to simplify your templates, and learn about some convenient directives that make solving specific problems a cinch! If you’re new to the framework, these tips will help you discover the excellent features of Blade, Laravel’s templating engine.
News and Announcements

Joomla 3.8.8 Release
Joomla 3.8.8 is now available. This is a security release which addresses 9 security vulnerabilities, contains over 50 bug fixes, and includes various security related improvements.

Atlas.Query: Simple. Sensible. SQL.
I am happy to announce that Atlas.Query is now stable and ready for production use! Installation is as easy as composer require atlas/query ~1.0.

Statamic 2.9 is Now Released
Statamic is a flat file CMS built on Laravel and Vue.js and they’ve just launched v2.9 that includes a host of new features, enhancements, and improvements to enhance the developer experience.

International PHP Conference - June 4-8th 2018, Berlin
The International PHP Conference is the world’s first PHP conference and has stood for more than a decade for top-notch pragmatic expertise in PHP and web technologies. Internationally renowned experts from the PHP industry meet up with PHP users and developers from large and small companies. Here is the place where concepts emerge and ideas are born - the IPC signifies knowledge transfer at the highest level. All delegates of the International PHP Conference have, in addition to the PHP program, free access to the entire range of the webinale taking place at the same time. Tickets are on sale now.

Dutch PHP Conference - June 7-9th 2018, Amsterdam
Ibuildings is proud to organise the eleventh Dutch PHP Conference on June 8th and 9th, plus a pre-conference tutorial day on June 7. Both programs will be completely in English so the only Dutch thing about it is the location. Keywords for these days: Know-how, Technology, Best Practices, Networking, Tips & Tricks. The target audience for this conference are PHP and Mobile Web Developers of all levels, software architects, and even managers. Beginners will find many talks aimed at helping them become better developers, while more experienced developers will come away inspired to do even better and with knowledge about the latest tools and methodologies. Tickets are on sale now.

WavePHP Conference - September 19th-21st 2018, San Diego
WavePHP Conference is bringing the wonderful PHP community to the Southwest United States. Designed to be a conference for both professionals and hobbyists alike. Held in beautiful southern California's San Diego County the area has ideal weather and tons of activities. Early Bird Tickets are on sale now.

Northeast PHP Conference - 19th-21st September 2018, Boston
Our event is a community conference intended for networking and collaboration in the developer community. While grounded in PHP, the conference is not just about PHP. Talks on web technology, user experience, and IT management help PHP developers broaden their skill sets. Early Bird Tickets are on sale now.

Symfony Live - September 27-28th 2018, London
Symfony is proud to organise the 7th edition of the British Symfony conference and to welcome the Symfony community from all over the UK. Join us for 2 days of Symfony to share best practices, experience, knowledge, make new contacts and hear the latest developments with the framework! The Call for Papers is now open, and Early Bird Tickets are on sale now.

Podcasts

Voices of the ElePHPant - Interview with Margaret Staples
Cal Evans sits down with Margaret Staples of Twilio to talk community.
 
MageTalk Magento Podcast #169 - An Evening with Eric Hileman
Eric Hileman, of MageMojo and Mojo Stratus fame, joins us to get down and dirty in the nitties and the gritties about what makes scaling on AWS so difficult, and how running Magento in the cloud takes more expertise now than ever before. Listen now!

PHP Roundtable Podcast Episode 70: All Things WordPress
We chat about backwards compatibility, Gutenberg, and the WordPress ecosystem. 

PHP Ugly Podcast #105: Exposed Source
This week's topics include Stack Overflow for Teams, and who controls glibc?
 
Post Status Draft Podcast - Making WordPress and WordSesh
In this episode, Brian and Brian discuss the upcoming WordSesh schedule and go spelunking through make.wordpress.org to surface some recent gems making their way to WordPress.org – both the project and the website.

Laravel Podcast Episode 12 - Interview: Samantha Geitz
Interview with Samantha Geitz, Senior Developer at Tighten.

Reading and Viewing

Book Review: Fifty Quick Ideas to Improve Your Tests - Part 1
"Fifty Quick Ideas to Improve Your Tests" is written by Gojko Adzic, David Evans, Tom Roden and Nikola Korac. After being asked if he knew about good resources on how to write good acceptance test scenarios, Matthias Noback read and subsequently reviewed this book.

Cloudways Interview - Success Story Of Evan Wong, CEO Of Australian Fintech Startup Checkbox
The fintech industry, which aims to apply digital innovations to existing financial modes of operations in order to improve the quality of financial services, is growing immensely in popularity thanks to the efforts of several startup incubators down under. Checkbox, the brainchild of CEO Evan Wong, is a standout venture that digitises complex regulations into automated cloud software without requiring developer coding. In this post, we highlight the success story of Evan Wong and his Checkbox startup.
 
Building a PHP Framework: Part 1 – Why? Seriously, Why?
There is a tremendous number of great PHP frameworks. Off the top of my head I can think of several. Yet, for every Laravel there are probably five lesser-known, high quality frameworks. So with all of that being said, it begs the question: why on Earth would you want to do this?

First-timer at DrupalCon: The Non-Tech Girl Experience
There’s a book about givers and takers that I’m reading right now. Adam Grant, the author, claims that good will conquer evil, a poor girl will find her prince, and the givers will triumph over the takers and exchangers. That sounds too sweet, but Adam wins me over step by step with his statistics and research. Here’s my little feedback on what I’ve seen during 4 days at DrupalCon Nashville: as a volunteer and a grant receiver, as an attendee and a Drupal team member. The little feedback on the Community of givers.

Jobs





Do you have a position that you would like to fill? PHP Weekly is ideal for targeting developers, and the cost is only $50/week for an advert. Please let me know if you are interested by emailing me at katie@phpweekly.com

Interesting Projects, Tools and Libraries

msphpsql
The Microsoft Drivers for PHP for Microsoft SQL Server are PHP extensions that allow for the reading and writing of SQL Server data from within PHP scripts.

easy-digital-downloads
Selling digital downloads is something that not a single one of the large WordPress ecommerce plugins has ever gotten really right. This plugin aims to fix that. 

ethereum-php
PHP interface to Ethereum JSON-RPC API. Fully typed Web3 for PHP 7.X.

lifterlms
LifterLMS, the #1 WordPress LMS solution, makes it easy to create, sell, and protect engaging online courses.

matomo
Matomo is the leading open alternative to Google Analytics that gives you full control over your data.

sourcebans-pp
Admin, ban, and comms management system for the Source engine.

tus-php
A pure PHP server and client for the tus resumable upload protocol v1.0.0.

boinc
Open-source software for volunteer computing and grid computing.
 
security
The Security component provides a complete security system for your web application.

piWallet
A popular, secure, open source online altcoin wallet that works with practically any altcoin. piWallet uses PHP, MySQL, JavaScript and Bootstrap, meaning it's quite simple to set up.

router
A barebones router for PHP. GET variables are populated automatically from the handler function's parameter list, and router callback handlers can be compiled into plain array source code.

luya
A Yii 2 wrapper to build beautiful, easily editable websites pretty fast!

 

So, how did you like this issue?

Like us on Facebook | Follow us on Twitter
We are still trying to grow our list. If you find PHP Weekly useful please tweet about us! Thanks.
Also, if you have a site or blog related to PHP then please link through to our site.

 
Copyright © 2018 PHP Weekly, All rights reserved.
Email Marketing Powered by MailChimp

Perspectives on Payments Canada Summit 2018

Published 23 May 2018 by Karen Myers in W3C Blog.

At the Payments Canada Summit, 9-11 May 2018 in Toronto, the historically conservative Canadian Financial Industry heard some consistent themes from many of the conference speakers: technology change is happening now and will continue to accelerate, so adapt quickly or be left behind. And data is the new currency.

The World Wide Web Consortium’s (W3C) Web Payments lead Ian Jacobs was invited to moderate one of the first panels titled, “Streamlined Checkout on the Web,” together with Shopify’s Andre Lyver, Director of Engineering, and Google’s Anthony Vallee-Dubois, Software Developer, who are active contributors to the technical work.


Andre Lyver, Anthony Vallee-Dubois and Ian Jacobs speaking at Payments Canada Summit 2018

The trio presented an overview of the W3C’s royalty-free standards work to make online checkout faster and more secure on the Web, and showed demos of implementations by Shopify and in the Google Chrome browser that are working today. At the W3C table in the exhibit hall, Jacobs and Lyver demonstrated how using the simplified “buy button”, and reusing browser-stored data, enables the completion of shopping transactions more quickly and securely.

Lyver presented some very early findings based on Shopify’s experimentation with the W3C’s Payment Request API, including findings of reduced checkout times through the browser interface, and popularity amongst shoppers around surfaced discount codes. Coupons, discount codes, and loyalty programs are under discussion in the W3C’s Web Payments Working Group and Commerce Interest Group.

Following the W3C panel session on Web Payments, the opening day keynote presentations previewed technology changes happening today, or arriving in the very near future. Stacey Madge, Country Manager and President at Visa Canada, envisioned a “wave of connectivity of everything” where Visa will embed credentials into all types of connected devices. Madge referenced a partnership between Visa and Honda to develop APIs for pay-at-the-pump and parking location and payment scenarios. W3C’s Automotive Web Payments Task Force is currently looking at how the Web may apply to these same use cases.

MasterCard’s Jessica Turner, EVP Digital Payments and Labs, emphasized EMVCo’s secure approach to payments using tokenization technology and echoed the coming ubiquity of IoT payments across devices.

Asheesh Birla, SVP of Product at Ripple, explained how blockchain technology is solving problems such as slow cross-border payments, and how smart contracts will help to reduce costs and help to build “the Internet of Value.” Ripple is building community around the related Interledger protocol in the W3C Interledger Payments Community Group.

Ulku Rowe, Technical Director Financial Services at Google, stressed the need for financial institutions to create a culture of innovation to keep pace with today’s environments of cloud computing, machine learning modules, and data analytics. Rowe said the old model of ‘big versus small’ is no longer relevant; it’s whether you are ‘fast or slow’ and can accelerate the transformation of financial services companies to become technology companies.

From a more personal perspective, Frank W. Abagnale, Jr., cybersecurity, fraud and identity theft prevention consultant and author (Catch Me If You Can), addressed attendees via videoconference on the final day of the summit. Abagnale offered pragmatic tips for companies and individuals to be more responsible for instilling rigorous security practices.

To protect personal identities, Abagnale advised: avoid paying with checks whenever possible because of the personal and bank account information printed on them; use confetti rather than strip shredders for any paper documents, even direct mail flyers, that have any personal information; use credit monitoring services and follow up immediately on any anomalies; and use credit cards rather than debit cards for purchases because the liability protections are better.

In closing comments, main stage host Bruce Croxon, recognized in Canada for his success as an entrepreneur, venture capitalist for startups, and media personality on CBC-TV’s “Dragons’ Den” and now BNN’s “The Disruptors,” encouraged fintech entrepreneurs to create solutions to real problems that have the potential for large market impact.

Payments Canada CEO Jan Pilbauer’s closing keynote painted a future of many innovations for the financial services and payments industry, including embedded payment systems as part of connected devices in home, work and outdoor environments.


Jan Pilbauer, Payments Canada CEO at closing keynote, Payments Canada Summit 2018

Payments Canada, a W3C member organization headquartered in Ottawa, Ontario, ensures that financial transactions in Canada are carried out safely and securely each day. The organization underpins the Canadian financial system and economy by owning and operating Canada’s payment clearing and settlement infrastructure, including associated systems, bylaws, rules and standards. The value of payments cleared by Payments Canada’s systems in 2017 was approximately $50 trillion, or $200 billion every business day. These encompass a wide range of payments made by Canadians and businesses involving inter-bank transactions, including those made with debit cards, pre-authorized debits, direct deposits, bill payments, wire payments and cheques.

Payments Canada is currently undergoing a multi-year modernization initiative based on a comprehensive roadmap for policy, process and technological improvements for all ecosystem participants.


"People cannot say they did not know what was happening"

Published 23 May 2018 by in New Humanist Articles and Posts.

Q&A with journalist and author Rania Abouzeid.

Cannot upload mp4 file to my mediawiki site

Published 23 May 2018 by feiffy in Newest questions tagged mediawiki - Stack Overflow.

I can't upload mp4 files to my MediaWiki site.

When I upload an mp4 file, it shows this error:

[screenshot: error message]

I have searched Google for the error message "Exception caught: No specifications provided to ArchivedFile constructor", but found nothing useful.

I have enabled uploads and allowed the mp4 file type; this is my LocalSettings.php:

$wgEnableUploads = true;
...
$wgFileExtensions = array_merge( $wgFileExtensions,
    array( 'mp4' )
);

cardiParty 2018.06 - Melbourne Birthday cardiParty!

Published 22 May 2018 by Hugh Rundle in newCardigan.

6:30pm, Friday 8 June at the Upper Terrace room, Duke of Wellington Hotel.

Find out more...


A Message About Intel’s Latest Security Findings

Published 21 May 2018 by Josh Feinblum in The DigitalOcean Blog.

In response to Intel’s statement today regarding new vulnerabilities, we wanted to share all the information we have to date with our customers and community.

Current information does not suggest that this latest vulnerability, Variant 4, would allow Droplets to gain access to the host hypervisor, or access to other Droplets. We also do not believe that we will need to reboot our entire fleet of hypervisors, as was necessary to mitigate impact from the initial Spectre and Meltdown vulnerabilities. However, there is a remote potential for exploit, and we are working with Intel to validate microcode patches for the vulnerabilities. We are accelerating the fix, but applying these updates takes coordination and time.

Our security and engineering teams are monitoring our hypervisors and following this issue closely. We remain in communication with our contacts at Intel regarding any new developments. The security of our users’ data is one of our highest priorities, and we are ready to take action if and when appropriate. At this time, we strongly recommend ensuring that you have the latest packages from your distributions, and you use the latest browser versions with fixes for Variant 4.

We will update this blog as more information becomes available. In addition to posting here, we will notify customers directly if there is a need to take action.


May Community Doers: Open Source Contributors

Published 21 May 2018 by Daniel Zaltsman in The DigitalOcean Blog.

May Community Doers: Open Source Contributors

Since DigitalOcean came to be, the founders have believed that the developer community is far greater than the sum of its parts. Six years later we continue to learn and grow thanks to the tireless work of our global community. Instrumental to increasing collaboration and ease-of-use, the Projects section of the Community received its first submission four years ago and today boasts a total of 186 apps, wrappers, and integrations using the DigitalOcean API.

In this month’s “Doers” spotlight, we highlight three builders who continue to maintain technology that makes a difference for users in the DigitalOcean ecosystem. When they are not working on software engineering and DevOps, they give back in a way that enriches the community. Please join us in recognizing May’s featured Doers:

Jeevanandam M. (@myjeevablog)

When he is not building out and supporting aah, the secure, flexible, and rapid Go web framework, Jeeva has been making valuable contributions that enable developers to use DigitalOcean. Since early 2014, he has maintained a widely used DigitalOcean API client library written in Java. The client is used by the Jenkins DigitalOcean plugin, powering a large number of CI use cases on top of DigitalOcean. We are immensely thankful for Jeeva’s commitment to quality and community, and believe this recognition is long overdue.

Lorenzo Setale (@koalalorenzo)

Lorenzo is a Copenhagen-based Italian developer of ideas who has been involved in the community since 2012. Anyone who has spun up Droplets using the python-digitalocean Python library will be familiar with Lorenzo's tireless work. He has long authored and maintained one of the most used and best supported DigitalOcean API libraries. A playground for experimentation for some, and a tool to build someone's first project for others: thanks to Lorenzo for the technology that keeps on giving.

Peter Souter (@petersouter)

Peter is an open source citizen who leads by example, noting on his blog, with regard to his work on Tugboat, a CLI that predates doctl, that “as long as people are interested I will keep maintaining and helping with open source software I maintain.” Previously at Puppet, Peter currently works at HashiCorp out of London, and we’re proud to say he's been around our community for a long time. In addition to being the main contributor to Tugboat, he has also contributed to droplet_kit, the Ruby API client. Thanks for all your work, Peter; we appreciate it.

Jeeva, Lorenzo, and Peter showcase the qualities we are proud to see in our community and we hope that they inspire others as well. We’re grateful to have this opportunity to recognize our amazing community contributors and if you’re interested in getting more involved in the DigitalOcean community, here are a few places to start:

Want to recognize someone in the community? Leave their name in the comments or reach out to Doers [at] DigitalOcean [.] com.


"Computers aren’t capable of using common sense"

Published 21 May 2018 by in New Humanist Articles and Posts.

Q&A with journalist and software developer Meredith Broussard.

From Another View in Geraldton

Published 21 May 2018 by carinamm in State Library of Western Australia Blog.

The From Another View project team visited Geraldton, opened a pop-up exhibition at the Museum of Geraldton and conducted a Storylines session at the Geraldton Regional Library.

Looking_MuseumGeraldton

Pop up exhibition at Museum of Geraldton (c) State Library of Western Australia, 2018

At the opening of the exhibition, Pop Robert Ronan welcomed audience members to Southern Yamaji country, the land of the Nhanhagardi, Wilunyu and Amangu. Robert reminisced about life in Geraldton and, as a younger man, sitting near the John Forrest statue on the foreshore. Robert recollected wondering what it might have been like for the expedition party to travel through his country.

dscf2163.jpg

Museum of Geraldton (c) State Library of Western Australia 2018

Members of the Museum of Geraldton Site Advisory Committee and the Walkaway Station Museum attended. In later life, Lady Forrest (Margaret Elvire Hammersley), John Forrest’s wife, lived at Georgina near Walkaway. Some of Lady Forrest’s belongings were donated to the Walkaway Station Museum.

The project team helped a number of families reconnect with photographs of family during the two-day visit. Here are some of the stories.

FredMallard

Fred Mallard and Con Kelly and some of the children camped at Galena. Taken at Galena on 2nd October, 1937, at about 6 p.m. by F.I. Bray, D.C.N.A. (Deputy Commissioner [Dept. of] Native Affairs. https://storylines.slwa.wa.gov.au/archive-store/view/6/1403

Charlie Cameron

Mr & Mrs Charlie Cameron at Cue. Photograph taken on 30/9/37 by F.I. Bray, D.C.N.A. https://storylines.slwa.wa.gov.au/archive-store/view/6/1408

During the Storylines session, Trudi Cornish from the Geraldton Regional Library explained that the story of the woman in the photograph is known, however her name is not. The woman was a contemporary of King Billy and ‘gave as good as she got’ when people would mock her with the name ‘Ugly Legs’ due to some scars she had.

Photograph of “Ugly Legs”, Geraldton 1900 https://storylines.slwa.wa.gov.au/archive-store/view/6/9854

The project team is packed up and ready for the onward journey to Wiluna to conduct a Storylines session and pop-up exhibition on Thursday 24 May 2018 at Tjukurba Art Gallery. The team will then head out to Martu, Birriliburu country along the Canning Stock Route and Gunbarrel Highway to the Mangkili Claypans, with two groups of traditional owners.

Onward

(c) State Library of Western Australia, 2018

Artist Bill Gannon will stop at Pia Wadjarri and visit the school, to discuss his artwork and John Forrest’s trek. Then he will travel to Wiluna via Mt Gould.

Looking at the map. Museum of Geraldton exhibition. (c) State Library of Western Australia, 2018


cardiCast episode 31 – Reece Harley

Published 18 May 2018 by Justine in newCardigan.

Perth February 2018 cardiParty

Recorded live

The Museum of Perth chronicles the social, cultural, political and architectural history of Perth. Their exhibition space serves as a meeting place of ideas and stories, a retail space, micro-cinema and a cultural hub in a forgotten part of the city.

For our March Perth cardiParty, Reece Harley, Executive Director and founder, gave an introductory talk about Museum of Perth, covering background info about the museum and the current exhibition.

The Museum is an initiative of the Perth History Association Inc, a not-for-profit organisation founded in 2015.

newcardigan.org
glamblogs.newcardigan.org

Music by Professor Kliq ‘Work at night’ Movements EP.
Sourced from Free Music Archive under a Creative Commons licence.

 


UK Archivematica meeting at Westminster School

Published 18 May 2018 by Jenny Mitcham in Digital Archiving at the University of York.

Yesterday the UK Archivematica user group meeting was held in the historic location of Westminster School in central London.

A pretty impressive location for a meeting!
(credit: Elizabeth Wells)


In the morning, once fuelled with tea, coffee and biscuits, we set about talking about our infrastructures and workflows. It was great to hear from a range of institutions and how Archivematica fits into the bigger picture for them. One of the points that lots of attendees made was that progress can be slow. Many of us were slightly frustrated that we aren't making faster progress in establishing our preservation infrastructures, but I think it was a comfort to know that we were not alone in this!

I kicked things off by showing a couple of diagrams of our proposed and developing workflows at the University of York. Firstly illustrating our infrastructure for preserving and providing access to research data and secondly looking at our hypothetical workflow for born digital content that comes to the Borthwick Institute.

Now that our AtoM upgrade is complete and Archivematica 1.7 has been released, I am hoping that colleagues can set up a test instance of AtoM talking to Archivematica that I can start to play with. In a parallel strand, I am encouraging colleagues to consider and document access requirements for digital content. This will be invaluable when thinking about what sort of experience we are trying to implement for our users. The decision is yet to be made around whether AtoM and Archivematica will meet our needs on their own or whether additional functionality is needed through an integration with Fedora and Samvera (the software on which our digital library runs)...but that decision will come once we better understand what we are trying to achieve and what the solutions offer.

Elizabeth Wells from Westminster School talked about the different types of digital content that she would like Archivematica to handle and different workflows that may be required depending on whether it is born digital or digitised content, whether a hybrid or fully digital archive and whether it has been catalogued or not. She is using Archivematica alongside AtoM and considers that her primary problems are not technical but revolve around metadata and cataloguing. We had some interesting discussion around how we would provide access to digital content through AtoM if the archive hadn't been catalogued.

Anna McNally from the University of Westminster reminded us that information about how they are using Archivematica is already well described in a webinar that is now available on YouTube: Work in Progress: reflections on our first year of digital preservation. They are using the PERPETUA service from Arkivum and they use an automated upload folder in NextCloud to move digital content into Archivematica. They are in the process of migrating from CALM to AtoM to provide access to their digital content. One of the key selling points of AtoM for them is its support for different languages and character sets.

Chris Grygiel from the University of Leeds showed us some infrastructure diagrams and explained that this is still very much a work in progress. Alongside Archivematica, he is using BitCurator to help appraise the content and EPrints and EMU for access.

Rachel MacGregor from Lancaster University updated us on work with Archivematica at Lancaster. They have been investigating both Archivematica and Preservica as part of the Jisc Research Data Shared Service pilot. The system that they use has to be integrated in some way with PURE for research data management.

After lunch in the dining hall (yes it did feel a bit like being back at school),
Rachel MacGregor (shouting to be heard over the sound of the bells at Westminster) kicked off the afternoon with a presentation about DMAonline. This tool, originally created as part of the Jisc Research Data Spring project, is under further development as part of the Jisc Research Data Shared Service pilot.

It provides reporting functionality for a range of systems in use for research data management including Archivematica. Archivematica itself does not come with advanced reporting functionality - it is focused on the primary task of creating an archival information package (AIP).

The tool (once in production) could be used by anyone regardless of whether they are part of the Jisc Shared Service or not. Rachel also stressed that it is modular - though it can gather data from a whole range of systems, it could also work just with Archivematica if that is the only system you are interested in reporting on.

An important part of developing a tool like this is to ensure that communication is clear - if you don’t adequately communicate to the developers what you want it to do, you won’t get what you want. With that in mind, Rachel has been working collaboratively to establish clear reporting requirements for preservation. She talked us through these requirements and asked for feedback. They are also available online for people to comment on:


Sean Rippington from the University of St Andrews talked us through some testing he has carried out, looking at how files in SharePoint could be handled by Archivematica. St Andrews are one of the pilot organisations for the Jisc Research Data Shared Service, and they are also interested in the preservation of their corporate records. There doesn’t seem to be much information out there about how SharePoint and Archivematica might work together, so it was really useful to hear about Sean’s work.

He showed us inside a sample SharePoint export file (a .cmp file). It consisted of various office documents (the documents that had been put into SharePoint) and other metadata files. The office documents themselves had lost much of their original metadata - they had been renamed with a consecutive number and given a .DAT file extension. The date last modified had changed to the date of export from SharePoint. However, all was not lost, a manifest file was included in the export and contained lots of valuable metadata, including the last modified date, the filename, the file extension and the name of the person who created file and last modified it.

Sean tried putting the .cmp file through Archivematica to see what happens. He found that Archivematica correctly identified the MS Office files (regardless of change of file extension) but obviously the correct (original) metadata was not associated with the files. This continued to be stored in the associated manifest file. This has potential for confusing future users of the digital archive - the metadata gives useful context to the files and if hidden in a separate manifest file it may not be discovered.

Another approach he took was to use the information in the manifest file to rename the files and assign them with their correct file extensions before pushing them into Archivematica. This might be a better solution in that the files that will be served up in the dissemination information package (DIP) will be named correctly and be easier for users to locate and understand. However, this was a manual process and probably not scalable unless it could be automated in some way.

He ended with lots of questions and would be very glad to hear from anyone who has done further work in this area.

Hrafn Malmquist from the University of Edinburgh talked about his use of Archivematica’s appraisal tab and described a specfic use case for Archivematica which had specific requirements. The records of the University court have been deposited as born digital since 2007 and need to be preserved and made accessible with full text searching to aid retrieval. This has been achieved using a combination of Archivematica and DSpace and by adding a package.csv file containing appropriate metadata that can be understood by DSpace.

Laura Giles from the University of Hull described ongoing work to establish a digital archive infrastructure for the Hull City of Culture archive. They had an appetite for open source and prior experience with Archivematica so they were keen to use this solution, but they did not have the in-house resource to implement it. Hull are now working with CoSector at the University of London to plan and establish a digital preservation solution that works alongside their existing repository (Fedora and Samvera) and archives management system (CALM). Once this is in place they hope to use similar principles for other preservation use cases at Hull.

We then had time for a quick tour of Westminster School archives followed by more biscuits before Sarah Romkey from Artefactual Systems joined us remotely to update us on the recent new Archivematica release and future plans. The group is considering taking her up on her suggestion to provide some more detailed and focused feedback on the appraisal tab within Archivematica - perhaps a task for one of our future meetings.

Talking of future meetings ...we have agreed that the next UK Archivematica meeting will be held at the University of Warwick at some point in the autumn.


UK Archivematica meeting at Westminster School

Published 18 May 2018 by Jenny Mitcham in Digital Archiving at the University of York.

Yesterday the UK Archivematica user group meeting was held in the historic location of Westminster School in central London.

A pretty impressive location for a meeting!
(credit: Elizabeth Wells)


In the morning once fuelled with tea, coffee and biscuits we set about talking about our infrastructures and workflows. It was great to hear from a range of institutions and how Archivematica fits into the bigger picture for them. One of the points that lots of attendees made was that progress can be slow. Many of us were slightly frustrated that we aren't making faster progress in establishing our preservation infrastructures but I think it was a comfort to know that we were not alone in this!

I kicked things off by showing a couple of diagrams of our proposed and developing workflows at the University of York. Firstly illustrating our infrastructure for preserving and providing access to research data and secondly looking at our hypothetical workflow for born digital content that comes to the Borthwick Institute.

Now that our AtoM upgrade is complete and Archivematica 1.7 has been released, I am hoping that colleagues can set up a test instance of AtoM talking to Archivematica that I can start to play with. In a parallel strand, I am encouraging colleagues to consider and document access requirements for digital content. This will be invaluable when thinking about what sort of experience we are trying to implement for our users. The decision is yet to be made as to whether AtoM and Archivematica will meet our needs on their own or whether additional functionality is needed through an integration with Fedora and Samvera (the software on which our digital library runs)...but that decision will come once we better understand what we are trying to achieve and what the solutions offer.

Elizabeth Wells from Westminster School talked about the different types of digital content that she would like Archivematica to handle and different workflows that may be required depending on whether it is born digital or digitised content, whether a hybrid or fully digital archive and whether it has been catalogued or not. She is using Archivematica alongside AtoM and considers that her primary problems are not technical but revolve around metadata and cataloguing. We had some interesting discussion around how we would provide access to digital content through AtoM if the archive hadn't been catalogued.

Anna McNally from the University of Westminster reminded us that information about how they are using Archivematica is already well described in a webinar that is now available on YouTube: Work in Progress: reflections on our first year of digital preservation. They are using the PERPETUA service from Arkivum and they use an automated upload folder in NextCloud to move digital content into Archivematica. They are in the process of migrating from CALM to AtoM to provide access to their digital content. One of the key selling points of AtoM for them is its support for different languages and character sets.

Chris Grygiel from the University of Leeds showed us some infrastructure diagrams and explained that this is still very much a work in progress. Alongside Archivematica, he is using BitCurator to help appraise the content and EPrints and EMU for access.

Rachel MacGregor from Lancaster University updated us on work with Archivematica at Lancaster. They have been investigating both Archivematica and Preservica as part of the Jisc Research Data Shared Service pilot. The system that they use has to be integrated in some way with PURE for research data management.

After lunch in the dining hall (yes, it did feel a bit like being back at school), Rachel MacGregor (shouting to be heard over the sound of the bells at Westminster) kicked off the afternoon with a presentation about DMAonline. This tool, originally created as part of the Jisc Research Data Spring project, is under further development as part of the Jisc Research Data Shared Service pilot.

It provides reporting functionality for a range of systems in use for research data management including Archivematica. Archivematica itself does not come with advanced reporting functionality - it is focused on the primary task of creating an archival information package (AIP).

The tool (once in production) could be used by anyone regardless of whether they are part of the Jisc Shared Service or not. Rachel also stressed that it is modular - though it can gather data from a whole range of systems, it could also work just with Archivematica if that is the only system you are interested in reporting on.

An important part of developing a tool like this is to ensure that communication is clear - if you don’t adequately communicate to the developers what you want it to do, you won’t get what you want. With that in mind, Rachel has been working collaboratively to establish clear reporting requirements for preservation. She talked us through these requirements and asked for feedback. They are also available online for people to comment on:


Sean Rippington from the University of St Andrews talked us through some testing he has carried out, looking at how files in SharePoint could be handled by Archivematica. St Andrews are one of the pilot organisations for the Jisc Research Data Shared Service, and they are also interested in the preservation of their corporate records. There doesn’t seem to be much information out there about how SharePoint and Archivematica might work together, so it was really useful to hear about Sean’s work.

He showed us inside a sample SharePoint export file (a .cmp file). It consisted of various office documents (the documents that had been put into SharePoint) and other metadata files. The office documents themselves had lost much of their original metadata - they had been renamed with a consecutive number and given a .DAT file extension. The date last modified had changed to the date of export from SharePoint. However, all was not lost: a manifest file was included in the export and contained lots of valuable metadata, including the last modified date, the filename, the file extension and the name of the person who created the file and last modified it.

Sean tried putting the .cmp file through Archivematica to see what happens. He found that Archivematica correctly identified the MS Office files (regardless of change of file extension) but obviously the correct (original) metadata was not associated with the files. This continued to be stored in the associated manifest file. This has potential for confusing future users of the digital archive - the metadata gives useful context to the files and if hidden in a separate manifest file it may not be discovered.

Another approach he took was to use the information in the manifest file to rename the files and assign them with their correct file extensions before pushing them into Archivematica. This might be a better solution in that the files that will be served up in the dissemination information package (DIP) will be named correctly and be easier for users to locate and understand. However, this was a manual process and probably not scalable unless it could be automated in some way.
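A minimal sketch of this renaming step, assuming a hypothetical CSV manifest with "export_name" and "original_name" columns (the real SharePoint manifest format is not described in the talk, so these names are illustrative only):

```python
import csv
import os

def restore_names(export_dir, manifest_path):
    """Rename exported .DAT files back to their original names.

    The manifest format here is hypothetical: a CSV with columns
    'export_name' (the .DAT name assigned on export) and
    'original_name' (the file's name before export).
    """
    with open(manifest_path, newline="") as f:
        for row in csv.DictReader(f):
            src = os.path.join(export_dir, row["export_name"])
            dst = os.path.join(export_dir, row["original_name"])
            if os.path.exists(src):
                os.rename(src, dst)
```

Automating a step like this before ingest would address the scalability concern, at the cost of depending on the manifest layout staying stable between SharePoint versions.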

He ended with lots of questions and would be very glad to hear from anyone who has done further work in this area.

Hrafn Malmquist from the University of Edinburgh talked about his use of Archivematica's appraisal tab and described a specific use case with particular requirements. The records of the University Court have been deposited as born digital since 2007 and need to be preserved and made accessible with full-text searching to aid retrieval. This has been achieved using a combination of Archivematica and DSpace, and by adding a package.csv file containing appropriate metadata that can be understood by DSpace.
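As an illustration only (the talk does not list the actual columns Edinburgh uses, and the field names below are hypothetical rather than the fields DSpace expects), generating such a metadata CSV could look like this:

```python
import csv

# Hypothetical rows: (filename, title, date) for each deposited record.
records = [
    ("court-minutes-2007.pdf", "University Court minutes", "2007"),
    ("court-minutes-2008.pdf", "University Court minutes", "2008"),
]

def write_package_csv(path, rows):
    """Write a simple metadata CSV alongside the transfer.

    The column names here are illustrative Dublin Core-style labels,
    not a documented DSpace import schema.
    """
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["filename", "dc.title", "dc.date"])
        writer.writerows(rows)

write_package_csv("package.csv", records)
```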

Laura Giles from the University of Hull described ongoing work to establish a digital archive infrastructure for the Hull City of Culture archive. They had an appetite for open source and prior experience with Archivematica so they were keen to use this solution, but they did not have the in-house resource to implement it. Hull are now working with CoSector at the University of London to plan and establish a digital preservation solution that works alongside their existing repository (Fedora and Samvera) and archives management system (CALM). Once this is in place they hope to use similar principles for other preservation use cases at Hull.

We then had time for a quick tour of Westminster School archives followed by more biscuits before Sarah Romkey from Artefactual Systems joined us remotely to update us on the recent new Archivematica release and future plans. The group is considering taking her up on her suggestion to provide some more detailed and focused feedback on the appraisal tab within Archivematica - perhaps a task for one of our future meetings.

Talking of future meetings ...we have agreed that the next UK Archivematica meeting will be held at the University of Warwick at some point in the autumn.


Forrest’s Exploration Diaries now online

Published 17 May 2018 by carinamm in State Library of Western Australia Blog.

Artist Bill Gannon and surveyor Rod Schlenker visited the State Library to see the original diaries of John and Alexander Forrest’s 1874 expedition from Geraldton to Adelaide. The diaries, which are held in the State Library collections, are now accessible online through the catalogue (ACC 1241A).


From Another View Project Coordinator Tui Raven with Rod Schlenker and Bill Gannon as they look at the diaries. © State Library of Western Australia, 2018.

This week Bill Gannon and a team from the State Library will embark on a trip to engage with Aboriginal communities and visit key locations along the 1874 trek route. This artistic and community engagement is part of the ‘From Another View’ project, a collaboration between the State Library and Minderoo Foundation. The project considers the trek ‘from another view’, or rather from many views, incorporating various creative and Aboriginal community perspectives.

Explore some of the camp locations referenced in John and Alexander Forrest’s diaries through the Google map.


Forrest’s Expedition to Central Australia, State Library of Western Australia, ACC 1241A

For more information about the From Another View project, go to https://fromanotherview.blog/ and follow the From Another View blog to keep updated with the project.

 


PHPWeekly May 17th 2018

Published 17 May 2018 by in PHP Weekly Archive Feed.

Curated news all about PHP. Here's the latest edition.
Here we are again PHP fans, with your latest edition of phpweekly.com.
 
This week we take a look at creating a custom settings panel in WooCommerce.
 
We also have Part 1 of a workflow series on deploying WordPress.
 
PHP Conference Asia has been announced, taking place in Singapore in September. Already confirmed to speak are Rasmus Lerdorf and Sebastian Bergmann. Super Early Bird tickets are on sale now.
 
Plus the latest Full Stack Radio podcast is all about Vuex, and using it to manage your application's state.
 
And finally, find out about upcoming events and releases across the WordPress project in The Month in WordPress: April 2018.

Have a great weekend, and enjoy your read.

Cheers
Ade and Katie

Please help us by clicking to our sponsor:

encrypt php scripts 
Protect your PHP Code
Why not try SourceGuardian 11. Click here to download a 14 Day Trial copy. Protect your code using Windows, Linux or Mac and run everywhere with our free Loaders.

Articles

Progress and Next Steps for Governance of the Drupal Community
One of the things I love the most about my new role as Community Liaison at the Drupal Association is being able to facilitate discussion amongst all the different parts of our Drupal community. I have the extraordinary privilege of access to bring people together and help work through difficult problems. The governance of the Drupal project has evolved along with the project itself for the last 17 years. I’m determined in 2018 to help facilitate the next steps in evolving the governance for our growing, active community.

I'm Starting a Newsletter
For the past few months, I've been looking for a new home to share articles, projects, podcasts, or other things that leave an impression on me.

Diversity Initiative: The CARE Team
Adopting a Code of Conduct was a great step forward for the Symfony community. Now, if a community member encounters an issue of harassment or other unwanted behaviour, they need to be able to report it and get support. This is one of the roles of the CARE team.

10 Best WordPress Event Management Plugins (Calendars, Ticketing, RSVPs)
If you’ve ever tried to install a calendar plugin you know that it’s not exactly the same as a fully functional event management tool. Calendars display dates of events, while the best WordPress event management plugins offer functions like ticketing, RSVPs, guest management, automated email notifications, booking forms and more. In order to achieve some of the more advanced calendar features, a WordPress event management plugin is required. What’s great is that you have dozens of options to choose from, and the best ones are affordable, powerful, and easy to understand.

Tutorials and Talks

Rectify: Turn All Action Injects to Constructor Injection in Your Symfony Application
Action Injections are much fun, but they can turn your project into legacy code very fast. How do you refactor out of the legacy back to constructor injection and still keep that smile on your face?

How to Create a Custom Settings Panel in WooCommerce
One of the reasons for WooCommerce's popularity is its extendability. Like WordPress itself, WooCommerce is packed full of actions and filters that developers can hook into if they want to extend WooCommerce's default functionality.

How To Send JSON Data From a Drupal 8 Site
Imagine a situation: your mobile application needs to get some information from your site on Drupal 8 using JSON. Why JSON? Why not XML? In this article you will learn how to do it without much effort and installing additional modules, how to change the JSON array programmatically, and send the JSON data with and without using Views.

Understanding Design Patterns - Template Method
Defines the skeleton of an algorithm in a method, deferring some steps to subclasses. Template Method lets subclasses redefine certain steps of an algorithm without changing the algorithm's structure.
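The pattern described above can be sketched briefly (in Python rather than the article's own PHP, and with an exporter example of my own invention):

```python
from abc import ABC, abstractmethod

class DataExporter(ABC):
    """Template Method: export() fixes the algorithm's skeleton;
    subclasses redefine the individual steps without changing
    the overall structure."""

    def export(self, rows):
        header = self.render_header()
        body = [self.render_row(r) for r in rows]
        return "\n".join([header, *body])

    @abstractmethod
    def render_header(self): ...

    @abstractmethod
    def render_row(self, row): ...

class CsvExporter(DataExporter):
    def render_header(self):
        return "name,age"

    def render_row(self, row):
        return f"{row['name']},{row['age']}"
```

Calling `CsvExporter().export([...])` runs the fixed skeleton in `export()` while the CSV-specific steps come from the subclass.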

Extending WordPress WP Forms Plugin Functionality
WP Forms plugin is a great form plugin that bundles a friendly and useful visual form builder. As a developer I want to use the form builder editor to allow users to be independent creating the forms, but I also want to save the form submissions in their own database structure, so I did a little research on the plugin code and found a neat way to do this.

Running Magento 2 API Tests Via Postman
In a current Magento 2 project we are focusing on building a headless instance that communicates with a kind of PWA application. In such an environment testing the APIs via Postman makes sense and since the Magento 2 API is documented via Swagger, one can easily import the API definition into Postman. Here is how to do it with httpie.

WordPress Deployment Part 1: Preparing WordPress
Welcome to the first post in a workflow series on deploying WordPress. In this series, we’re going to look at how you can set up automated deployments for your WordPress site in a range of different ways. But before we get into the “how”, first we’re going to look at why you should consider setting up automated deployments for WordPress and how you can prepare your site for automated deployments.

Testing Your Code with Multiple Versions of PHP Using Docker
About a year ago, I spent some time working with an open source project called PHP Crud API. The project creates a RESTful API from a relational database using a single PHP script. It’s quite an impressive feat of engineering, but as I started working on the project, I realised I needed a reliable way to test my changes in different versions of PHP. That’s where Docker comes in.

How To Do PHP Continuous Integration With Travis CI
Code versioning has become a standard practice in development circles, with GitHub being a popular platform for hosting code repos. However, a common issue is the testing of the code as it is pushed by a team member. As the volume of commits increases, ensuring the quality and accuracy of code becomes a challenge.

Monitoring File Changes Using NodeJS
During script runs – that change files – I frequently need to check which files have been modified by the scripts, especially in CRON automated tasks. This allows me to take appropriate actions depending on the file state change. The following post shows how we can monitor file state changes in nodejs.

Introducing View Components in Laravel, An Alternative to View Composers
In software development, one of the “best practices” is to create reusable code that can be implemented in different parts of your application if needed.

Mocking with Anonymous Classes
PHP7 gave us some cool features, including anonymous classes. These are classes that you can define on the fly, associate with a variable and instantiate whenever you like. In a well-built application you might think there are limited use cases for these, with all the classes you need having their own file and specific place in the application, but what about classes that are incredibly custom, a few lines long and barely used?
 
Getting Lucky With Crystal in Homestead
I’m going to open this post with an apology for anyone whose corporate firewall gets triggered by this URL, but I just couldn’t resist the title.
News and Announcements

Atlas 3.x (“Cassini”) and PHPStorm Completion
I’m proud to announce the release of Atlas.Orm 3.0.0-beta1, along with releases of the supporting Mapper, Table, Query, Cli, and Pdo packages. (Atlas is a data-mapper for your persistence model, not your domain model, in PHP.)

php[tek] Conference - May 31st-June 1st 2018, Atlanta
php[tek] 2018 is the premier PHP conference and annual homecoming for the PHP Community. This conference will be our 13th annual, and php[architect] and One for All Events are excited to continue to host the event in Atlanta! Tickets are on sale now.

Oscon - July 16-19th 2018, Portland
OSCON is the complete convergence of the technologies transforming industries today, and the developers, engineers, and business leaders who make it happen. The 20th Open Source Convention takes place next July. From architecture and performance, to security and data, get expert full stack programming training in open source languages, tools, and techniques. Tickets are on sale now.

PHP Detroit Conference - 26-28th July 2018, Livonia
PHPDetroit is a two-day, regional PHP conference that brings the community together to learn and grow. We're preceding the conference with a 2 track tutorial day that will feature 4 sessions covering various topics. We will also be running an UnCon alongside the main tracks on Friday and Saturday, where attendees can share unscheduled talks. Tickets are on sale now.

CoderCruise - August 30-September 3rd 2018, Ft. Lauderdale, FL
Tired of the usual web technology conference scene? Want a more inclusive experience that lets you get to know your fellow attendees and make connections? Well, CoderCruise was designed to be just this. It's a polyglot developer conference on a cruise ship! This year we will be taking a 5-day, 4-night cruise out of Ft. Lauderdale, FL that includes stops at Half Moon Cay and Nassau. Tickets are on sale now.

Pan-Asian PHP Conference - September 26-29th 2018, Singapore
The third pan-Asian PHP conference will take place in September 2018 in Singapore - the Garden City of the East! This is a single-track, two-day conference, followed by a day of tutorials on 29th September 2018. Come and meet the fastest growing PHP communities in Asia. More than 300 attendees are expected at this single-track conference, with Rasmus Lerdorf and Sebastian Bergmann already confirmed as speakers. The Call for Papers is now open, and Super Early Bird tickets are on sale now.

Podcasts

Laravel News Podcast LN62: Caching, Bots, and Async Programming
Jake and Michael discuss all the latest Laravel releases, tutorials, and happenings in the community, which this week featured a lot of caching.

Voices of the ElePHPant - Interview with TJ Gamble
Cal Evans and TJ Gamble sit down and talk Magento, PWAs, and Imagine.

Full Stack Radio Podcast Episode 88: Blake Newman - Vue.js State Management with Vuex
In this episode, Adam talks to Blake Newman about getting started with Vuex, and how you would use it to manage your application's state using several practical real-world examples.

MageTalk Magento Podcast #168 - “You’re Already Connected to Your Sister”
We hope you like talking about GDPR because this one is ALL. ABOUT. GDPR. Buckle up, buttercup.

PHP Ugly Podcast #104: We Lose Our Free Will
Topics include the Twitter mass password reset and how dark patterns trick you online.

Post Status Draft Podcast - The Meta Episode
In this episode, Brian and Brian discuss metadata in WordPress, including the challenge of implementing data into new tools, such as the REST API and the Gutenberg editor.

Reading and Viewing

Book Review: Discovery - Explore Behaviour Using Examples
I've just finished reading "Discovery - Explore behaviour using examples" by Gáspár Nagy and Seb Rose. It's the first in a series of books about BDD (Behavior-Driven Development). The next parts are yet to be written/published. 

Why WordPress Uses PHP
Why does WordPress use PHP? In this video from my course, Learn PHP for WordPress, you'll get a detailed answer to this question. I'll give you an introduction to what PHP is and then show you why it's used in WordPress.

The PHP Developer Stack for Building Chatbots
On July 19th 20:00 CEST, I will join a Nomad PHP meeting to talk about The PHP Developer Stack for Building Chatbots. I am super excited to present my new talk, and I want to tell you a little bit more about it.

PHP Versions Stats - 2018.1 Edition
It's stats o'clock! See 2014, 2015, 2016.1, 2016.2, 2017.1 and 2017.2 for previous similar posts.

The Month in WordPress: April 2018
This past month saw a lot of preparation for upcoming events and releases across the WordPress project. Read on to find out more about these plans, and everything else that happened around the community in April.

Jobs

German Speaking PHP Developer (m/f)
You’re proud to call yourself a nerd and consider programming in PHP to be more than just a job? You’d like to help us make our shop better and faster while simultaneously providing our customers with an unparalleled and flawless shopping experience? If you feel like this describes you and also happen to have a weakness for new technology, you’re just the person we’re looking for!




Do you have a position that you would like to fill? PHP Weekly is ideal for targeting developers and the cost is only $50/week for an advert.  Please let me know if you are interested by emailing me at katie@phpweekly.com

Interesting Projects, Tools and Libraries

psalm
A static analysis tool for finding errors in PHP applications.

subrion
A Content Management System (CMS) which allows you to build websites for any purpose. Yes, from blog to corporate mega portal.

laravel-ecommerce
AvoRed E Commerce is an open source Laravel shopping cart.

easy-digital-downloads
Sell digital downloads through WordPress.

dv-php-core
Devless is a ready-made back-end for development of web or mobile applications. 

mantisbt
Mantis Bug Tracker

bulk-delete
Bulk Delete is a WordPress Plugin that allows you to delete posts, pages and users in bulk based on different conditions and filters.

applicationinsights-php
This project extends the Application Insights API surface to support PHP.

php-invoker
Invoke PHP callables with a timeout.

aimeos-core
Aimeos PHP e-commerce framework for high performance online shops.

backdrop
Backdrop is a full-featured content management system that allows non-technical users to manage a wide variety of content. 

phpstan
PHP Static Analysis Tool - discover bugs in your code without running it!


So, how did you like this issue?

We are still trying to grow our list. If you find PHP Weekly useful please tweet about us! Thanks.
Also, if you have a site or blog related to PHP then please link through to our site.

Block Storage Volumes Gets a Performance Burst

Published 15 May 2018 by Priya Chakravarthi in The DigitalOcean Blog.

At DigitalOcean, we’ve been rapidly adding new products and features on our mission to simplify cloud computing, and today we're happy to announce our latest enhancement.

Over the first half of 2018, we've improved performance for Block Storage Volumes with backend upgrades that reduce cluster latency by 50% and provide new burst support for higher performance for spiky workloads.

Burst Performance Characteristics

Block Storage Volumes have a wide variety of use cases, like database reads and writes as well as storing logs, static assets, backups, and more. The performance expectations from a particular volume will depend on how it's used.

Database workloads, for example, need single-digit millisecond latency. Most workloads in the cloud today are bursty, however, and don't require sustained high performance at all times. Use cases like web servers, backups, and data warehousing can require higher performance due to short increases in traffic or a temporary need for more bandwidth.

To meet the need for very low latency, we upgraded Ceph to its latest version, Luminous v12.2.2, in all regions containing Block Storage. This reduced our cluster latency by 50% and provides the infrastructure you need to manage databases with Block Storage Volumes.

To support spiky workloads, we added burst support, which automatically increases Block Storage Volumes' IOPS and bandwidth rates for short periods of time (60 seconds) before returning to baseline performance to cool off (60 seconds).
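As a rough illustration of what that cycle means for sustained workloads, here's a minimal sketch (our own helper, not DigitalOcean code) that time-averages IOPS over one 60-second burst plus 60-second cool-off cycle, using the Standard Droplet figures quoted in this post:

```python
def average_iops(baseline_iops, burst_iops, burst_s=60, cooloff_s=60):
    """Time-averaged IOPS over one burst/cool-off cycle, assuming the
    workload saturates the volume the entire time."""
    total_ops = burst_iops * burst_s + baseline_iops * cooloff_s
    return total_ops / (burst_s + cooloff_s)

# Standard Droplet plan: 5000 IOPS baseline, 7500 IOPS burst
print(average_iops(5000, 7500))  # 6250.0
```

In other words, a workload that pushes the volume continuously lands between the baseline and burst figures, while truly spiky workloads that fit inside the 60-second window see the full burst rate.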

Here's a summary of the burst performance characteristics, which compares a Standard Droplet (SD) plan and an Optimized Droplet (OD) plan:

Droplet Plan                    SD        OD
Baseline IOPS (IOPS/volume)     5000      7500
Baseline BW (MB/s)              200       300
Burst IOPS (IOPS/volume)        7500      10000
Burst BW (MB/s)                 300       350
Avg Latency                     <10 ms    <10 ms

We don't scale performance by the size of the volume you create, so every Block Storage Volume is configured to provide the same level of performance for your applications. However, your application needs to be written to realize these limits, and the kind of performance you get will depend on your app's configuration and a number of other parameters.

Performance and Latency Benchmarking

To learn more about the performance you're getting, we wrote How To Benchmark DigitalOcean Volumes, which explains not only how to benchmark your volumes but also how to interpret the results.

We then ran some of these tests internally to share the numbers and performance of our offering. You can find all the details in the tutorial, but here's a sample of results, which shows typical performance based on the queue depth (QD) of the application and the block size (on the x-axis) versus IOPS (on the y-axis).

[Graphs: block size (x-axis) vs IOPS (y-axis) at several queue depths, for reads and writes]

These graphs show that the IOPS rate increases as queue depth increases until we hit our practical IOPS cap. Smaller block sizes tend to be IOPS limited, while larger block sizes tend to be bandwidth limited.
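That crossover follows from simple arithmetic: bandwidth equals IOPS times block size, so the effective IOPS ceiling is whichever cap is hit first. Here's a hedged sketch (our own helper, using the Standard Droplet caps quoted in this post, and treating 1 MB as 10^6 bytes):

```python
def effective_iops(block_size_bytes, iops_cap=5000, bw_cap_mbps=200):
    """IOPS ceiling at a given block size: the IOPS cap, or the rate
    at which the bandwidth cap saturates, whichever is lower."""
    bw_limited_iops = bw_cap_mbps * 1_000_000 / block_size_bytes
    return min(iops_cap, bw_limited_iops)

print(effective_iops(4 * 1024))   # 4K blocks: IOPS-limited at 5000
print(effective_iops(64 * 1024))  # 64K blocks: bandwidth-limited (~3052)
```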

What about latency? Most real-world customer applications won't run the kind of workload often used as a baseline (QD=1, 4K I/O), so these graphs show latency in microseconds (µsec) as we add load to the cluster.

[Graphs: latency in µsec as load is added to the cluster, for reads and writes]

We see the same behavior in reads and writes. Because of how the backend stores the data, our results show that 16K has better latency at high queue depth, so we recommend you tune for 16K workloads if possible.

What's Next?

The performance improvements aren’t the only thing we have in store. There are several QoS features and infrastructure investments in the pipeline to improve your experience of Block Storage Volumes. (Ready to get started? Create a Volume now.)

We'd love to hear your thoughts, questions, and feedback. Feel free to leave a comment here or reach out to us through our UserVoice.


Episode 8: BTB Digest 1

Published 15 May 2018 by Yaron Koren in Between the Brackets: a MediaWiki Podcast.

The best of episodes 1-5! Well, not really the best, but the most relevant (and maybe interesting) parts of the first five episodes, condensed into a short(-ish) 30-minute digest.


DDD Perth Survival Tips

Published 13 May 2018 by Derek in DDD Perth - Medium.

So, you are going to DDD Perth. It's your first time at a conference and you need some help navigating the uncharted territory. Well, you have come to the right place. If you have 5 minutes to spare, please take the time to read this survival guide and it will hopefully help you get the most out of the conference.

TIP 1 #earlybirdie

At conferences, registration queues can be long, the waits can be lengthy and anxiety levels are often at their maximum.

At DDD Perth we endeavour to make this wait as fun and action-packed as possible. However, why not avoid the queues altogether and turn up early? Early birds not only avoid queues, they get first dibs on any swag being offered by sponsors (Hint: the best laptop stickers go early), and they get to the coffee purveyors before anyone else, so find themselves fully caffeinated and ready for the first keynote. (Note: at DDD we champion diversity, so we also encourage non-coffee addicts to stay ‘hydrated’ …)

TIP 2 #beprepared

Being prepared for a conference at first glance seems a bit odd — after all, it's a day off work and being prepared sounds a bit like work!! But nooooo, being prepared will enhance your experience more than 3D glasses at the latest Avengers movie. Here is a checklist to help:

  1. Get your hands on the running order Before The Event (the website the night before is your best option, as it's the most up to date)
  2. Bring a backpack — very important for swag
  3. Put your laptop in it — if you want to break out and build something cool
  4. Don’t forget a portable charger for your phone — for mobile PUBG perhaps
  5. Bring a small snack — see #stayfuelled
  6. Bring a jumper — a hoody for that Rami Malek in ‘Mr Robot’ hacker look

TIP 3 #planyourday

“No plan survives first contact with the enemy” — which does not mean it's not a good idea to have one. Below is a quick list you can check off when preparing your DDD Perth plan. They may seem like no-brainers, but then again most plans are. (Note: not a dig at project managers.)

  1. Know the Venue’s location — be familiar with how you are going to get to DDD Perth (and home after the after party)
  2. Pick your talks — it's good to have a chosen talk and a reserve talk for each time slot, because life.
  3. Research your speakers — finding out about your chosen speakers adds to the experience, as it gives you some insight into their talk.
  4. Get a good seat — be early to your chosen talks; standing for a 40-minute talk isn't fun.

TIP 4 #volunteerlove

DDD is a not-for-profit event, run by people who give up their time and use personal compute cycles to make sure you have a most awesome time. Please be kind and generous with your time when dealing with them. They will be easily recognised by their multi-sponsor-emblazoned green t-shirts, grimacing and sweating, red faces… trust me, they are having fun. Without these giant Oompa Loompas this event wouldn't happen. (Note: some sport luxurious beards and should be treated like any other animal at the zoo …)

TIP 5 #stayfuelled

There is a lot of information to take in and digest at DDD Perth, so staying fuelled throughout is important. Luckily we thought of that, and the event will be fully catered. However, it's always handy to have a snack for when you feel sugar levels heading towards dangerous (sleepy) territory.

TIP 6 #schmoozing

We are all introverted nerds who, beset with imposter syndrome, don't like venturing beyond our keyboards. This is true, but the beauty of DDD is that we are ALL introverted nerds beset by imposter syndrome (even the speakers!)

So, get out there and mingle, make connections. Start conversations with complete strangers, safe in the knowledge that they feel exactly like you… It's a pretty unique situation; it would be a shame to waste it. Who knows what cool stuff you might find out about, just by talking to the introverted nerd beside you — it might even be me.

tl;dr — respect your peers and have an awesome time at DDD Perth !!


DDD Perth Survival Tips was originally published in DDD Perth on Medium, where people are continuing the conversation by highlighting and responding to this story.


Untitled

Published 10 May 2018 by Sam Wilson in Sam's notebook.

It is time I think (5AM on a Friday) to finally try to get the Flickr2Piwigo CLI script working. Small job before breakfast?


Restore a corrupted MediaWiki to a newer version of MediaWiki

Published 10 May 2018 by G_G in Newest questions tagged mediawiki - Webmasters Stack Exchange.

A working installation of MediaWiki was corrupted by a user "touching" all files under the directory structure, leaving every file with identical permissions and modification dates. I'm not sure which of these caused the wiki to stop working, but that's what happened.

The MediaWiki version is 1.26, which I know is currently out of support.

Every single file of the wiki is available, and the directory structure is intact. The wiki's DB is no longer available. However, the images/media (if indeed stored in the DB) are not as critical to the user as the actual page text content.

Is there a way to save this wiki? I've looked into restoring MediaWiki, but that assumes the wiki has been backed up properly, which unfortunately is not our case.

Thank you.


PHPWeekly May 10th 2018

Published 10 May 2018 by in PHP Weekly Archive Feed.

PHPWeekly May 10th 2018
Curated news all about PHP.  Here's the latest edition
PHP Weekly 10th May 2018
Welcome to the latest @phpweekly newsletter.

With the 30 Days of Testing challenge underway, second on the list was about reading and sharing E-commerce testing articles. Learn all about WooCommerce here.

Also this week, with Drupal 8 maturing and Drupal ever evolving, the Drupal Association Board continues to evolve with it.
With this in mind, there are two At-large positions on the Association Board of Directors. Self-nominations run from 1st-11th June, 2018, with voting taking place in July.

We have the second part of Lessons from Laracasts, a collection of tips taken from the Let's Build A Forum with Laravel and TDD tutorial. 

Plus Cal Evans interviewed Nils Adermann and Jordi Boggiano in the latest Voices of the ElePHPant podcast.

And finally, the first Laracon Australia takes place in October, in Sydney. Speakers already confirmed include Matt Stauffer and the framework's author, Taylor Otwell. Get your early bird tickets now.

Enjoy your read, 

Cheers
Ade and Katie

Articles

Testing Your E-commerce PHP Application
I'm participating (as much as possible) in the #30daysoftesting challenge organised by Ministry of Testing and SauceLabs. If you're interested, read the full 30 Days of E-Commerce Testing article and join this fun and educational challenge. The 2nd challenge on the list was to read and share interesting blog articles about E-commerce testing. Since I'm working as a PHP professional, I thought it would be great to focus on testing PHP-based E-commerce platforms. I picked WooCommerce as it's an easy to install and use E-commerce solution. For Magento, PrestaShop and others I've added useful links at the bottom of this article.

Programming = Climbing a Huge Mountain
Let's take a break after two long code posts from last week and enjoy a bit of philosophy. I've applied the mountain climber mindset to programming for the last two years, and it really helps me overcome difficult spots. Today we'll climb together.

Drupal Association Board Elections 2018
Now that Drupal 8 is maturing, it is an exciting time to be on the Drupal Association Board. With Drupal always evolving, the Association must evolve with it so we can continue providing the right kind of support. And, it is the Drupal Association Board who develops the Association’s strategic direction by engaging in discussions around a number of strategic topics throughout their term. As a community member, you can be part of this important process by becoming an At-large Board Member.

A Good Issue
Maintaining a number of open source projects comes with a number of issues. Reporting a good issue will result in a more engaged approach from project maintainers. Don't forget: there's a human behind every project.

Tutorials and Talks

Understanding Design Patterns - Command
Encapsulates a request as an object, thereby letting you parameterise other objects with different requests, queue or log requests, and support undoable operations.

Querying and Eager Loading Complex Relations in Laravel
Laravel is a PHP framework that uses Eloquent, a powerful and amazing ORM that allows you to do complex SQL queries in a very easy way. But sometimes you need more, and here I’m gonna give you an interesting tip that can bring you a lot of flexibility.

Notifications in Laravel
In this article, we're going to explore the notification system in the Laravel web framework. The notification system in Laravel allows you to send notifications to users over different channels. Today, we'll discuss how you can send notifications over the mail channel.

Introducing New Symfony Polyfills for PHP 7.3 and Ctype
Symfony Polyfills provide some features from PHP core and PHP extensions implemented as PHP 5.3 code, so you can use them in your applications regardless of the PHP version being run on your system.

How to Create a PayPal Donate Button for Your WordPress Site
From non-profit organisations to churches, and political campaigns to bloggers who need early support, several situations warrant asking for donations. Several WordPress plugins are available for collecting donations, but more often than not all you need is a simple PayPal Donate button.

Sending Email Asynchronously With ReactPHP Child Processes
In PHP, most libraries and native functions are blocking and thus block the event loop. For example, each time we make a database query with PDO, or check a file with file_exists(), our asynchronous application blocks and waits. Things often become challenging when we want to integrate some synchronous code into an asynchronous application. This problem can be solved in two ways.

PHP Application Logging with Amazon CloudWatch Logs and Monolog
Logging and information debugging can be approached from a multitude of different angles. Whether you use an application framework or code from scratch, it's always comforting to have familiar components and tools across different projects. In our examples today, I am going to enable Amazon CloudWatch Logs logging with a PHP application.

Speed Up Laravel on Top of Swoole
Swoole is a production-grade async programming framework for PHP. It is a PHP extension written in pure C language, which enables PHP developers to write high-performance, scalable, concurrent TCP, UDP, Unix socket, HTTP, WebSocket services in PHP without too much knowledge of the non-blocking I/O programming and low-level Linux kernel. You can think of Swoole as something like NodeJS but for PHP, with higher performance.

How to Install Laravel on Amazon Cloud (AWS EC2)
Laravel is a popular framework that has become the standard development toolkit for many PHP projects. In many cases, developers prefer to develop their project in Laravel because of the many features and tools that ensure streamlined development experience.

How Laravel Broadcasting Works
Today, we are going to explore the concept of broadcasting in the Laravel web framework. It allows you to send notifications to the client side when something happens on the server side. In this article, we are going to use the third-party Pusher library to send notifications to the client side.

[Entry] Appointment Scheduler
This scheduler allows you to create appointments to be scheduled in different rooms. You can create rooms, create appointments to be added directly to the scheduler, move appointments between rooms and time slots on the scheduler, schedule appointments without a time to be added later (drag and drop them on).

News and Announcements

Joomla 3.9 and Joomla 3.10
As you most probably know, the General Data Protection Regulation (GDPR) will enter into force on 25 May, 2018. Joomla, listening to its users, intends to integrate a Privacy Tool Suite in the Joomla CMS to facilitate the compliance of your sites and to make developers’ life easier to get their extensions compliant.

CakePHP Conference - June 14-17th 2018, Nashville
CakeFest is organised for developers, managers and interested newcomers alike, bringing a world of unique skill and talent together in a celebration and learning environment around the world's most popular PHP framework. Celebrating over eleven years of success in the PHP and web development community, CakePHP's 2018 conference will be an event not to miss. Tickets are on sale now.

Mid-Atlantic Developer Conference - July 13-14th 2018, Baltimore
Mid-Atlantic Dev Con is a polyglot event, designed to bring together programmers from the region, regardless of their choice of platform, for two full days of learning from each other and building a stronger regional community. Early Bird ticket sales end in two days.

Laracon EU - 29-31st August 2018, Amsterdam
Laracon EU is a unique international Laravel event with over 750 attendees. The conference has multiple tracks and is focusing on in-depth technical talks. Come learn about the state of the industry while networking with like-minded and diversely experienced developers. Tickets are on sale now.

ZendCon - 15-17th October 2018, Las Vegas
ZendCon & OpenEnterprise is the premier technology conference designed to teach and share practical experiences from the front lines of enterprise PHP and open source environments. Focused on solving real-world, enterprise-class problems, technical business leaders, strategists, and developers will assemble to discuss case studies and best practices around the application of PHP and open source to transform business. The Call for papers is now open, and Blind Bird tickets are on sale now.

Laracon AU - October 18-19th 2018, Sydney
Two days of learning and networking with the Laravel community in Australia for the first time. The two day conference will see us welcome some of the most prominent Laravel community members including Matt Stauffer, Adam Wathan, and the framework’s author Taylor Otwell as speakers alongside a host of terrific local speaking talent. Early Bird Tickets are on sale now.

Nomad PHP US - June 21st 2018 20:00 CDT
Win Big, Cache Out. Presented by Ashley Hutson. Caching can be a very complicated and loaded topic in Computer Science. There are many factors to consider, from query caching, results caching and SQL caching to partial content caching and full page caching. Look forward to finding out when, what, and where you should typically be caching, along with best practices for implementing caching in PHP with various technologies (Redis, Memcached, and cloud-based solutions). Always remember that you can over-cache, so it is important not to go overboard.

Nomad PHP EU - June 21st 2018 20:00 CEST
Solving Problems Using Trees. Presented by Tomasz Kowalczyk. The tree is one of the most important data structures available in Computer Science. If you know how to describe a problem using trees, you can significantly improve the speed and quality of the developed solution. In this talk, I'd like to show what kinds of problems can be solved with trees and give examples of how I did that in several non-trivial situations.

Podcasts

Voices of the ElePHPant - Interview with Nils Adermann and Jordi Boggiano
In this episode, Cal talks with Nils Adermann and Jordi Boggiano about Composer and Packagist.

Three Devs and a Maybe Podcast - Site Reliability Engineering with Niall Murphy
In this week’s episode we are lucky to be joined by Niall Murphy to discuss the discipline of Site Reliability Engineering.

MageTalk Magento Podcast #167 - The Left Hand of Agreement / The Right Hand of Discord
What happens when Phillip forgets his headphones? Terrible audio quality, that's what! Recorded 30 days before Imagine 2018 and never released, this episode held up production due to its never-ending issues and basically almost never saw the light of day.

The Laracasts Snippets Episode 83: Stream of Consciousness
While most episodes generally focus on one central idea, today is more a stream of consciousness. We'll discuss everything from the struggles of running a business, to Metroid, to social media addiction, to Cobra Kai. Grab a drink and let's hang out.

PHP Ugly Podcast #103: The Longhorn Peace Summit
Topics include the Coinbase Blog and photos from LonghornPHP.

Post Status Draft Podcast - All About You(r Privacy)
In this episode, the two Brians discuss the current conversations and controversy surrounding data collection and visitor privacy on the web.

Reading and Viewing

Vienna PHP Meetup – Blackfire Talk
This blog post has been written by Emir Beganović, an active community member and speaker at PHP meetups. He reached out to us for some support on making a great meetup talk about Blackfire.

Bizarro Devs
A curated newsletter with all the cool, wacky and obscure tech news delivered on a weekly basis (you'll get it on Tuesdays). It's free and will help you earn the most Slack reactions in your office. See the latest issue here, featuring phpweekly.com!

Cloudways Interview - Zvonimir Burić Talks About Magento Development Workflows
Learn what Zvonimir Burić, Technical Lead & Magento Developer, has to say about the future of Magento 1 and 2, ecommerce trends, and Magento development principles and workflows in our one-to-one session.

What's New in Laravel 5.6
Laravel 5.6 is upon us! While it's true that this release isn't quite as flashy, there are still a number of incredibly useful new additions and updates. Let's review them together.

Exakat PHP Index of Coding (May 2018)
Not using @ is the poster child of good practices. It's also looked upon as an impossible goal. Did you know that the @ operator is used by only 50% of PHP applications? The same goes for parentheses with include (and co): don't use them, like 50% of developers. This is how the Exakat PHP Index of Coding was born.

Lessons From Laracasts, Part 2
This is part 2 of my collection of tips I've taken from Let's Build a Forum with Laravel and TDD, the mega-tutorial (102 lessons) by Jeffrey Way on his Laracasts site. Part 1 is here. This post contains 51 tips, covering lessons 43-102.

Jobs

German Speaking PHP Developer (m/f)
You’re proud to call yourself a nerd and consider programming in PHP to be more than just a job? You’d like to help us make our shop better and faster while simultaneously providing our customers with an unparalleled and flawless shopping experience? If you feel like this describes you and also happen to have a weakness for new technology, you’re just the person we’re looking for!




Do you have a position that you would like to fill? PHP Weekly is ideal for targeting developers and the cost is only $50/week for an advert.  Please let me know if you are interested by emailing me at katie@phpweekly.com

Interesting Projects, Tools and Libraries

intl
A PHP 5.5+ internationalisation library, powered by CLDR data.

lazer-database
PHP flat-file database to store data with JSON.

php-slang
The place where PHP meets Functional Programming.

auth0-PHP
The Auth0 PHP SDK provides straightforward and tested methods for accessing Authentication and Management API endpoints.

dephpugger
Dephpugger (read depugger) is an open source library for debugging PHP directly in the terminal, without needing to configure an IDE.

coding-standard
Slevomat Coding Standard for PHP_CodeSniffer complements Consistence Coding Standard by providing sniffs with additional checks.

zentaopms
Zentao is an agile (Scrum) project management system/tool, free to upgrade forever.

phpegg
A multi-application-mode PHP framework, supporting web page, REST API, JSON-RPC and gRPC applications.

fortnite-php
Interact with the official Fortnite API using PHP.

jedy
Jedy CMS Multi-language is created with Symfony 3.

php-audit
PhpAudit is a tool for creating and maintaining audit tables and triggers for creating audit trails of data changes in MySQL databases.

phlex
A super-sexy voice interface for the Plex HTPC.


Introducing Updates for Load Balancers

Published 8 May 2018 by Tyler Crandall in The DigitalOcean Blog.

In February 2017, we launched Load Balancers, our highly available and managed load balancing service. Thousands of users rely on them to distribute traffic across Web and application servers.

Today, we’re announcing significant upgrades to Load Balancers, including Let's Encrypt integration and HTTP/2 support. All users now have access to these features at no additional cost and with no action required. In fact, all existing Load Balancers have already been upgraded.

Let’s Encrypt Integration

Load Balancers now support a simple method to generate, manage, and maintain SSL certificates using Let’s Encrypt.

With a couple of clicks, you can add a free Let’s Encrypt SSL certificate to your Load Balancer to secure your traffic and offload SSL processing. Certificates will automatically renew, so you don't have to worry about a thing.

HTTP/2

Load Balancers now also support the HTTP/2 protocol, which is a major update to HTTP/1.x designed primarily to reduce page load time and resource usage. You can find this under the Forwarding Rules dropdown in your Load Balancer settings.

Load Balancers can additionally terminate HTTP/2 client connections to act as a gateway to HTTP/1.x applications, allowing you to take advantage of HTTP/2's performance and security improvements without upgrading your backend servers.

Keep a look out for more performance-focused announcements in the coming months.

Our improved Load Balancers are available in all regions for the same price of $20/month. For more information about Load Balancers, please check out our website and these community articles:

Happy coding,
Tyler Crandall
Product Manager


Building the best group email for teams – an interview with our COO

Published 8 May 2018 by David Gurvich in FastMail blog.

Topicbox launched in August 2017 and since then we’ve been busy creating the best group email product we can for teams, whatever that team looks like.

We recently sat down with Helen Horstmann-Allen, Topicbox COO and Head of Product, to talk about the history of Topicbox, the future of email, and how group email can help a wide range of teams be more productive and organized than ever before.

Helen has worked in tech and email for more than 20 years and is still in love with email today.


Firstly, tell us a bit about your background in email and tech?

Helen: I got started with Pobox, an email forwarding service (and now part of the FastMail family), in 1995. It was the world's first lifetime email address, and having ‘one address for life’ was our initial concept.

And like many companies, we started getting feedback from customers right away asking us for more features they wish we would add. And a very early feature request was group email, colloquially known as ‘listserves’.

The most popular open source product at that point in time was a program called Majordomo, and so majordomo.pobox.com launched in early 1996. We had tons of people sign up for it and we very quickly decided that it actually should be its own product, Listbox, which we launched late in 1996.

Listbox is still around today, but it went through many iterations. Initially it was just people who wanted to talk to each other, over email, which was so novel back then!

Over time we expanded the service offering to include email marketing and newsletters, but my first passion was always the group email product.

I think email is a tool that everybody has access to. It is one of the only pieces of technology that is almost truly universal. It’s accessible to almost everyone and when you talk about the value of email as a discussion tool it’s incredibly inclusive.

In 2015 I sold my business (which had created Pobox and Listbox) to FastMail and when we started talking about what we could do together, group email was one of the first things we both thought sounded like a really interesting idea.

Listserves have been around for a very, very, very long time. They are one of the foundational technologies of the internet, but nobody does them really well. And we thought what a great opportunity. If we could make a great group email solution, could we change the way people use all their email?

After quite a bit of work and many iterations we launched Topicbox. Topicbox was originally built to serve people at a pretty large size and as we started working on it, and we started testing it with people, we discovered that in fact even very small groups can get a lot of benefit out of group email.

What were some of the other challenges in creating Topicbox?

Helen: Email is one of those technologies that people love to hate. There’s a lot of challenges in it and most of them have to do not with the sending of email but the receiving and the organizing.

There are always the clients, or the products or the projects that you absolutely positively need to hear everything about the moment it happens. There are other things where you just need to kind of know stuff is going on but you don’t need to be interrupted by it all day long. And then there are plenty of projects that other people are working on, you need to be aware of maybe and you might need access to it in the future but you don’t need to actually see it now.

If Topicbox could take all that information – some of which you get today and you're frustrated with, and lots of which you don't get today and so don't have when you need it – and put it in one place that everyone in your organization can share, then how much of your team's best knowledge could we make accessible to you?

How can Topicbox help teams communicate better?

Helen: In many ways using Topicbox is just like using your regular email. The only difference is instead of sending it one-to-one, or CCing a whole group of people, you send it to your group.

The group can be predefined, either by you when you create it, or by someone else who is running the project, and that’s really it. You still send the same information but you get it to the right audience.

I use Topicbox just by myself sometimes, just for one-to-one correspondence because I know that some day, someone besides me is going to need to read it and it is somewhere they can now get at it.

We use it for groups of two or three people. Instead of some people getting CCd on some messages and not others and they have an incomplete history … a Topicbox group lets you have a complete history for everyone to see, even if they end up joining a project later.

And then of course for big groups it makes perfect sense. You don’t want to have to have your ‘All company’ messages in one place, you want to have one central tool that you use for everything.

Where does Topicbox sit amongst a suite of modern communication tools such as more traditional email, instant messaging and CRMs?

Helen: Topicbox is something that almost every company who uses email can use. If in your company today you CC people then you probably should be using Topicbox.

Imagine Topicbox as a tool that puts control of the flow of messages into the hands of the people who receive them.

Chat is terrific, but it’s kind of like a replacement for a telephone call or walking by a group. When you have a really active chat platform it can feel like reading a transcript of everything that happened in your office over the course of a day. That’s too much information for lots of people. And it’s not a great way to get oversight over an organization.

If you feel overwhelmed by the volume of chat, moving your important discussions up to Topicbox is a wonderful solution and a great add-on to those existing tools.

But if you’re using regular email in almost any way, if you ever CC somebody, a Topicbox group is going to help you retain that information in a more useful, more searchable and more categorized way. And that helps when you bring more people on, it helps when you transition people out, it helps when you start up a new project and it helps when you retire that project.

You can say all that information has now gone to one contained place, and we'll start a new group to discuss a new thing – we don't have to have our history backlog littered with information about old projects, which is what happens when you re-use a lot of tools.

What’s your favourite Topicbox feature?

Helen: I love the Daily Summary! I love getting one message that I can quickly skim through and just see what other people are working on who aren't necessarily in my department, or aren't necessarily in my team. It's definitely not what I am working on, but it gives me a little oversight into what everybody else has got going on, and [gives me insights] if something important is happening in an area of the company that I'm not dealing with.

I also love organization-wide search. Who hasn’t found themselves in the position of knowing that something has been discussed, not necessarily knowing where to look for it? Topicbox helps you find what you are looking for and then immediately places you in the context where you can also see what else has happened around there very, very quickly.

How else has Topicbox improved your own business communication?

Helen: One of my favourite places to use it is with clients. When you are dealing with any type of external organization you may have one, two or three different touchpoints there and you may also have multiple people on your staff who need to deal with them.

Creating a Topicbox group is a really easy way to make sure that everybody is on the same page all the time.

Do you use Topicbox through the web browser, mobile or your email program?

Helen: I started out using Topicbox almost exclusively through email and as time went on I found myself more often going to the website and using the Message Composer to respond to old threads.

And what’s great about that is then I know I can just go back to my inbox and throw away everything because now I know it’s in Topicbox so I don’t have to hold onto it myself.

What is planned for Topicbox in 2018?

Helen: We’ve got some big things planned so stay tuned! I can’t share anything just yet but we’re currently looking at more ways to make Topicbox even better for teams.

We also love hearing from our customers, so if you have any feedback or feature wishes please let us know via Twitter or contact us directly.


Berlin Underground

Published 6 May 2018 by jenimcmillan in Jeni McMillan.

UBahn

It’s 3.19 am. Berlin time. I am dancing in the underground. Sweet violin plays the strings of my heart. Ride of the Valkyries. My soul in question.


Our Journey towards Diversity

Published 6 May 2018 by Rebecca Waters in DDD Perth - Medium.

DDD Perth 2017 Speakers

DDD Perth has a tagline that reads:

DDD Perth is an inclusive non-profit event for the Perth software community. Our goal is to create an approachable conference that anyone can attend or speak at, especially people that don’t normally get to attend / speak at conferences.

It’s an admirable mission statement, if I do say so myself! DDD Perth aims to do this by adhering to a few golden DDD rules, and a few more that are DDD Perth-ified:

Focussing on creating a safe and inclusive environment where everyone is welcome

The last line I’ve called out here is one I want to spend a bit of time on. In 2016, the conference was into it’s second year. The inaugural conference in 2015 went well. The founders, Matt Davies and Rob Moore might have had minor personal breakdowns and maxxed their personal credit cards to finance the venture, but on the whole they pulled it off and were ready for the challenge of a repeat performance.

So off on the conference train it went! The ticket price stayed as low as possible. The event was on a Saturday. The Call for Presentations was broad and the voting democratic. There was a code of conduct and it was followed by all attendees.

It was also a fantastic conference. Held at the Mercure Inn, the conference ran 3 tracks, was attended by 200 people and simply was a great day. From talks on Authentication, to presenters drinking growlers on stage, the day was a success. This was the first time I managed to get to the conference and I was in awe of how enjoyable the day was.

The thing is, the safe and inclusive environment was welcoming for sure, but the agenda didn’t feature a single woman. It was something that the organisers saw, the attendees saw, the speakers saw. I’m also proud to say that in his opening address, Rob stood up and owned it. He made a pledge to do something about it for 2017.

In 2017, I joined the organising committee, as did many other talented and motivated folks.

When we came to reflect on the 2016 conference and look ahead to 2017, we set our sights on changing that.

We weren’t sure where to start, but we decided that the next step towards an inclusive conference was addressing gender diversity in speakers. There are other diversity and inclusion points to address, but as a group of volunteers, we recognised that if we were to be successful, we needed to concentrate our efforts, so we focussed on gender diversity.

As I talk about gender diversity in this post, I want to make mention of non-binary genders. In 2017, DDD Perth had no attendees or speakers that identified as anything except Female or Male. Our registration questionnaire allowed for ‘other’ genders to be included, but did not feature a free text field.

We asked our contacts in the Perth community who had experience in this area, and put that advice together with our own ideas.

We looked at two parts of forming an agenda; submissions and selection.

Submissions

When we looked at where our submissions were coming from, it was pretty evident early on that we didn’t have a big reach into the software community.

The number of submissions jumped from 48 in 2016 to 104 in 2017

Reaching Out

We reached out through our contacts to as many community groups and companies as we could. The feeling was that a bigger reach would naturally result in submissions from women.

We also spearheaded a grass roots campaign to increase diversity in submissions. We all knew of impressive women in software in Perth — who doesn't — so we asked them to submit, and we asked them to convince their peers to submit as well.

Michelle Sandford, one of Microsoft’s Top Social Media Influencers worldwide, had previously mentioned how much she enjoyed the conference, and agreed to help us promote submissions from women. Fenders, with the incomparable Mandy Michael, helped spread the word. There are many others who helped promote DDD Perth in 2017.

Flexibility in Submissions

Donna Edwards, State Delivery Manager at Readify, makes a great point in her 2017 DDD Perth talk about criteria and the willingness of women to apply. She’s talking about attraction strategies in the workplace; looking back we applied similar thinking to our conference submission process. (I should have put a spoiler alert on this paragraph as to our success in attracting women speakers, huh?)

We took a long look at our conference submission criteria. We looked from the eyes of a first time presenter, from women, from minority groups in our community, and ultimately we removed what we thought were two barriers to submission: unconscious bias and length of talk.

The thought was that for a first time presenter, a 45 minute talk could be intimidating. We introduced 20 minute talk lengths to encourage people who maybe thought they didn’t have enough content for a long talk to still consider submitting.

37% of 2017 talk submissions were 20 minutes in length

Unconscious Bias is an ugly thing to think about, isn't it? Still, we forced ourselves to think on this point. Quite possibly, the people voting recognised some names and voted for those talks based on notoriety (be that fame, unconscious gender bias, unconscious race bias…the list goes on). We recognised that this could lead to our conference getting old very quickly, should we only hear from the same 10 presenters every year. We decided that we would introduce anonymous voting.

I mention this whilst talking about Submissions, because we said upfront on the submission page that all identifying information would be hidden come voting time. We felt that in order to remove the barrier, we needed to be upfront about our intent. Being transparent about our process was important to us.

21% of submissions in 2017 included a presenter who identified as female

Selection

DDD Perth, as with all DDD conferences, has a democratically chosen agenda. What this means is that anyone is able to influence the agenda during our one week voting window.

I’ve already mentioned our anonymous voting changes. We stripped identifying information such as names and pronouns, but left experience and credentials.

However, we didn’t prohibit people mentioning the title of their talk on social media. This was a point of discussion for us; would it detract from the anonymous voting? Would it unfairly advantage those people with a following already?

Look, the answer to that is Yes. There was an advantage for people who vocally encouraged their networks to vote for them. This worked for both male and female speakers. However we felt that as well as being impossible to police, the promotion of the conference was a good byproduct. It also allowed our allies, such as Michelle, to promote the submissions by women she admired.

When it came time to tally the votes, we found that the process employed had yielded positive results.

A quarter of speakers in 2017 identified as female

I also want to mention our Code of Conduct. On securing speakers, we required that each person explicitly agreed to our enforceable Code of Conduct. We discussed what to do on the day if this became an issue, and every volunteer knew what to do and who to contact.

Uncomfortable as confrontation is, I’m glad we spent time discussing the possibility of expelling people from the conference. It hasn’t been required yet, but that doesn’t take away from the need to be prepared.

Not there yet

As you can see in the above statistics, DDD Perth is improving, but we're not there yet. That's why in 2018, we have a dedicated champion, Matt Ward, to help focus our efforts, and not only on gender diversity.

We’re concentrating on achieving better representation in gender, seniority, ethnicity, accessibility and role. This isn’t an exhaustive list, nor will 2018 be the perfect conference. We just hope it’s another step forward.


Our Journey towards Diversity was originally published in DDD Perth on Medium, where people are continuing the conversation by highlighting and responding to this story.


How to create sitemaps with only non-www URLs in MediaWiki website?

Published 5 May 2018 by Arnb in Newest questions tagged mediawiki - Webmasters Stack Exchange.

I usually prefer the non-www URL for my website (http://example.com, not http://www.example.com). While submitting the site to Google one needs to set a preferred domain by choosing either the www or the non-www version of URL. Then, during submitting the sitemap, only the sitemap containing the preferred version of URL should be submitted to Google search console.

I have a website run by MediaWiki software. It uses a PHP script (named generateSitemap.php, bundled with the MediaWiki installation package) to create the sitemaps. One can set cron jobs to automate the process of updating the sitemap at regular intervals.

The problem is my sitemaps are being generated containing the www version of the page URLs.

How should I instruct the programs to generate sitemaps without www in the URLs?
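One likely fix (sketched here under the assumption of a standard MediaWiki install; the paths are illustrative) is to make the non-www host the canonical one, either globally via `$wgServer` in LocalSettings.php or just for the sitemap run via the `--server` option that MediaWiki maintenance scripts accept:

```shell
# In LocalSettings.php, set the canonical host so every URL MediaWiki
# generates (including sitemap entries) uses the non-www form:
#   $wgServer = "http://example.com";

# Alternatively, override the host for the sitemap run only
# (--server is a standard MediaWiki maintenance-script option;
# the paths below are illustrative):
php maintenance/generateSitemap.php \
    --server="http://example.com" \
    --urlpath="http://example.com/sitemap/" \
    --fspath=sitemap/
```

Either way, verify the generated sitemap files before resubmitting them to Search Console.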


The anatomy of an AtoM upgrade

Published 4 May 2018 by Jenny Mitcham in Digital Archiving at the University of York.

Yesterday we went live with our new upgraded production version of AtoM.

We've been using AtoM version 2.2 since we first unveiled the Borthwick Catalogue to the world two years ago. Now we have finally taken the leap to version 2.4.

We are thrilled to benefit from some of the new features - including the clipboard, being able to search by date range and the full width treeview. Of course we are also keen to test the work we jointly sponsored last year around exposing EAD via OAI-PMH for harvesting.

But what has taken us so long you might ask?

...well, upgrading AtoM has been a new experience for us and one that has involved a lot of planning behind the scenes. The technical process of upgrading has been ably handled by our systems administrator. Much of his initial work behind the scenes has been on 'puppetising' AtoM to make it easier to manage multiple versions of AtoM going forward. In this post though I will focus on the less technical steps we have taken to manage the upgrade and the decisions we have made along the way.

Checking the admin settings

One of the first things I did when I was given a test version of 2.4 to play with was to check out all of the admin settings to see what had changed.

All of our admin settings for AtoM are documented in a spreadsheet alongside a rationale for our decisions. I wanted to take some time to understand the new settings, read the documentation and decide what would work for us.

Some of these decisions were taken to a meeting for a larger group of staff to discuss. I've got a good sense of how we use AtoM but I am not really an AtoM user so it was important that others were involved in the decision making.

Most decisions were relatively straightforward and uncontroversial but the one that we spent most time on was deciding whether or not to change the slugs...

Slugs

In AtoM, the 'slug' is the last element of the url for each individual record within the catalogue - it has to be unique so that all the urls go to the right place. In previous versions of AtoM the slugs were automatically generated from the title of each record. This led to some interesting and varied urls.


Slugs are therefore hard to predict ...and it is not always possible to look at a slug and know which archive it refers to.

This possibly doesn't matter, but could become an issue for us in the future should we wish to carry out more automated data manipulation or system integrations.

AtoM 2.4 now allows you to choose which fields your slugs are generated from. We have decided that it would be better if ours were generated from the identifier of the record rather than the title. The reason being that identifiers are generally quite short and sweet and of course should be unique (though we recently realised that this isn't enforced in AtoM).

But of course this is not a decision that can be taken lightly. Our catalogue has been live for 2 years now and users will have set up links and bookmarks to particular records within it. On balance we decided that it would be better to change the slugs and do our best to limit the impact on users.

So, we have changed the admin setting to ensure future slugs are generated using the identifier. We have run a script provided by Artefactual Systems that changed all the slugs that are already in the database. We have set up a series of redirects from all the old urls of top level descriptions in the catalogue to the new urls (note that having had a good look at the referrer report in Google Analytics it was apparent that external links to the catalogue generally point at top level descriptions).
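As an illustration, those redirects amount to one-to-one mappings from old title-based slugs to new identifier-based ones. In an Apache setup they might look something like this (both slugs below are invented for illustration; the real mapping came from the old and new slug lists):

```shell
# Hypothetical Apache (mod_alias) rules: one permanent redirect per
# top-level description, old title-based slug -> new identifier-based slug.
#   Redirect permanent /some-archive-title /gb193-abc
```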

Playing and testing

It was important to do a certain amount of testing and playing around with AtoM 2.4 and it was important that it wasn't just myself who did this - I encouraged all my colleagues to also have a go.

First I checked the release notes for versions 2.3 and 2.4 so I had a good sense of what had changed and where I should focus my attention. I was then able to test these new features and direct colleagues to them as appropriate for further testing or discussion.

While doing so, I tried to think about whether any of these changes would necessitate changes in our workflows and processes or updates to our staff handbook.

As an example - it was noted that there was a new field to record occupations for authority records. Rather than letting individuals decide how to use this field, it is important to agree an institutional approach and consider an appropriate methodology or taxonomy. As it happens, we have decided not to use this field for the time being and this will be documented accordingly.

Assessing known bugs

Being a bit late to the upgrade party gives us the opportunity to assess known bugs and issues with a release. I spent some time looking at Artefactual's issues log for AtoM to establish whether any of them were going to cause us major problems or would require a workaround to be put in place.

There are lots of issues recorded and I looked through many of them (but not all!). Fortunately, very few looked like they would have an impact on us. Most related to functionality we don't utilise - such as the ability to use AtoM with multiple institutions or translate it into multiple languages.

The one bug that I thought would be irritating for us was related to the accessions counter which was not incrementing in version 2.4. Having spent a bit of time testing, it seemed that this wasn't a deal breaker for us and there was a workaround we could put in place to enable staff to continue to create accession records with a unique identifier relatively easily.

Testing local workarounds

Next I tested one of the local workarounds we have for AtoM. We use a CSS print stylesheet to help us to generate an accessions report to send donors and depositors to confirm receipt of an archive. This still worked in the new version of AtoM with no issues. Hoorah!

Look and feel

We gave a bit of thought to how AtoM should be styled. Two years ago we went live with a slightly customised version of the Dominion theme. This had been styled to look similar to our website (which at the time was branded orange).

In the last year, the look and feel of the University website has changed and we are no longer orange! Some thought needed to be given to whether we should change the look of our catalogue now to keep it consistent with our website. After some discussion it was agreed that our existing AtoM theme should be maintained for the time being.

We did however think it was a good idea to adopt the font of the University website, but when we tested this out on our AtoM instance it didn't look as clear...so that decision was quickly reversed.

Usability testing

When we first launched our catalogue we carried out a couple of rounds of user testing (read about it here and here) but this was quite a major piece of work and took up a substantial amount of staff time.

With this upgrade we were keen to give some consideration to the user experience but didn't have resource to invest in more user testing.

Instead we recruited the Senior User Experience Designer at our institution to cast his eye over our version of AtoM 2.4 and give us some independent feedback on usability and accessibility. It was really useful to get a fresh pair of eyes to look at our site, but as this could be a whole blog post in itself, I won't say any more here...watch this space!

Updating our help pages

Another job was to update both the text and the screenshots on our static help pages within AtoM. There have been several changes since 2.2 and some of these are reflected in the look and feel of the interface. 

The advanced search looks a bit different in version 2.4 - here is the refreshed screenshot for our help pages

We were also keen to add in some help for our users around the clipboard feature and to explain how the full width treeview works.

The icons for different types of information within AtoM have also been brought out more strongly in this version, so we also wanted to flag up what these meant for our users.


...and that reminds me, we really do need a less Canada-centric way to indicate place!

Updating our staff handbook

Since we adopted AtoM a few years ago we have developed a whole suite of staff manuals which record how we use AtoM, including tips for carrying out certain procedures and information about what to put in each field. With the new changes brought in with this upgrade, we of course had to update our internal documentation.

When to upgrade?

As we drew ever closer to our 'go live' date for the upgrade we were aware that Artefactual were busy preparing their 2.4.1 bug fix release. We were very keen to get the bug fixes (particularly for that accessions counter bug that I mentioned) but were not sure how long we were prepared to wait.

Luckily with helpful advice from Artefactual we were able to follow some instructions from the user forum and install from the GitHub code repository instead of the tarball download on the website. This meant we could benefit from those bug fixes that were already stable (and pull others to test as they become available) without having to wait for the formal 2.4.1 release.

No need to delay our upgrade further!

As it happens it was good news we upgraded when we did. The day before the upgrade we hit a bug in version 2.2 during a re-index of Elasticsearch. Nice to know we had a nice clean version of 2.4 ready to go the next day!

Finishing touches

On the 'go live' date we'd put word around to staff not to edit the catalogue while we did the switch. Our systems administrator got all the data from our production version of 2.2 freshly loaded into 2.4, ran the scripts to change the slugs and re-indexed the database. I just needed to do a few things before we asked IT to do the Domain Name System switch.

First I needed to check all the admin settings were right - a few final tweaks were required here and there. Second I needed to load up the Borthwick logo and banner to our archival institution record. Thirdly I needed to paste the new help and FAQ text into the static pages (I already had this prepared and saved elsewhere).

Once the DNS switch was done we were live at last! 

Sharing the news

Of course we wanted to publicise the upgrade to our users and tell them about the new features that it brings.

We've put AtoM back on the front page of our website and added a news item.

Let's tell the world all about it, with a catalogue banner and news item

My colleague has written a great blog post aimed at our users and telling them all about the new features, and of course we've all been enthusiastically tweeting!


...and a whole lot of tweeting

Future work

The upgrade is done but work continues. We need to ensure harvesting to our library catalogue still works and of course test out the new EAD harvesting functionality. Later today we will be looking at Search Engine Optimisation (particularly important since we changed our slugs). We also have some remaining tasks around finding aids - uploading pdfs of finding aids for those archives that aren't yet fully catalogued in AtoM using the new functionality in 2.4.

But right now I've got a few broken links to fix...


The anatomy of an AtoM upgrade

Published 4 May 2018 by Jenny Mitcham in Digital Archiving at the University of York.

Yesterday we went live with our new upgraded production version of AtoM.

We've been using AtoM version 2.2 since we first unveiled the Borthwick Catalogue to the world two years ago. Now we have finally taken the leap to version 2.4.

We are thrilled to benefit from some of the new features - including the clipboard, being able to search by date range and the full width treeview. Of course we are also keen to test the work we jointly sponsored last year around exposing EAD via OAI-PMH for harvesting.

But what has taken us so long you might ask?

...well, upgrading AtoM has been a new experience for us and one that has involved a lot of planning behind the scenes. The technical process of upgrading has been ably handled by our systems administrator. Much of his initial work behind the scenes has been on 'puppetising' AtoM to make it easier to manage multiple versions of AtoM going forward. In this post though I will focus on the less technical steps we have taken to manage the upgrade and the decisions we have made along the way.

Checking the admin settings

One of the first things I did when I was given a test version of 2.4 to play with was to check out all of the admin settings to see what had changed.

All of our admin settings for AtoM are documented in a spreadsheet alongside a rationale for our decisions. I wanted to take some time to understand the new settings, read the documentation and decide what would work for us.

Some of these decisions were taken to a meeting for a larger group of staff to discuss. I've got a good sense of how we use AtoM but I am not really an AtoM user so it was important that others were involved in the decision making.

Most decisions were relatively straightforward and uncontroversial but the one that we spent most time on was deciding whether or not to change the slugs...

Slugs

In AtoM, the 'slug' is the last element of the url for each individual record within the catalogue - it has to be unique so that all the urls go to the right place. In previous versions of AtoM the slugs were automatically generated from the title of each record. This led to some interesting and varied urls.


Slugs are therefore hard to predict ...and it is not always possible to look at a slug and know which archive it refers to.

This possibly doesn't matter, but could become an issue for us in the future should we wish to carry out more automated data manipulation or system integrations.

AtoM 2.4 now allows you to choose which fields your slugs are generated from. We have decided that it would be better if ours were generated from the identifier of the record rather than the title. The reason being that identifiers are generally quite short and sweet and of course should be unique (though we recently realised that this isn't enforced in AtoM).

But of course this is not a decision that can be taken lightly. Our catalogue has been live for 2 years now and users will have set up links and bookmarks to particular records within it. On balance we decided that it would be better to change the slugs and do our best to limit the impact on users.

So, we have changed the admin setting to ensure future slugs are generated using the identifier. We have run a script provided by Artefactual Systems that changed all the slugs that are already in the database. We have set up a series of redirects from all the old urls of top level descriptions in the catalogue to the new urls (note that having had a good look at the referrer report in Google Analytics it was apparent that external links to the catalogue generally point at top level descriptions).

Playing and testing

It was important to do a certain amount of testing and playing around with AtoM 2.4 and it was important that it wasn't just myself who did this - I encouraged all my colleagues to also have a go.

First I checked the release notes for versions 2.3 and 2.4 so I had a good sense of what had changed and where I should focus my attention. I was then able to test these new features and direct colleagues to them as appropriate for further testing or discussion.

While doing so, I tried to think about whether any of these changes would necessitate changes in our workflows and processes or updates to our staff handbook.

As an example - it was noted that there was a new field to record occupations for authority records. Rather than letting individuals to decide how to use this field, it is important to agree an institutional approach and consider an appropriate methodology or taxonomy. As it happens, we have decided not to use this field for the time being and this will be documented accordingly.

Assessing known bugs

Being a bit late to the upgrade party gives us the opportunity to assess known bugs and issues with a release. I spent some time looking at Artefactual's issues log for AtoM and establish if any of them were going to cause us major problems or required a workaround to be put in place.

There are lots of issues recorded and I looked through many of them (but not all!). Fortunately, very few looked like they would have an impact on us. Most related to functionality we don't utilise - such as the ability to use AtoM with multiple institutions or translate it into multiple languages.

The one bug that I thought would be irritating for us was related to the accessions counter which was not incrementing in version 2.4. Having spent a bit of time testing, it seemed that this wasn't a deal breaker for us and there was a workaround we could put in place to enable staff to continue to create accession records with a unique identifier relatively easily.

Testing local workarounds

Next I tested one of the local workarounds we have for AtoM. We use a CSS print stylesheet to help us to generate an accessions report to send donors and depositors to confirm receipt of an archive. This still worked in the new version of AtoM with no issues. Hoorah!

Look and feel

We gave a bit of thought to how AtoM should be styled. Two years ago we went live with a slightly customised version of the Dominion theme. This had been styled to look similar to our website (which at the time was branded orange).

In the last year, the look and feel of the University website has changed and we are no longer orange! Some thought needed to be given to whether we should change the look of our catalogue now to keep it consistent with our website. After some discussion it was agreed that our existing AtoM theme should be maintained for the time being.

We did however think it was a good idea to adopt the font of the University website, but when we tested this out on our AtoM instance it didn't look as clear...so that decision was quickly reversed.

Usability testing

When we first launched our catalogue we carried out a couple of rounds of user testing (read about it here and here) but this was quite a major piece of work and took up a substantial amount of staff time.

With this upgrade we were keen to give some consideration to the user experience but didn't have resource to invest in more user testing.

Instead we recruited the Senior User Experience Designer at our institution to cast his eye over our version of AtoM 2.4 and give us some independent feedback on usability and accessibility. It was really useful to get a fresh pair of eyes to look at our site, but as this could be a whole blog post in itself so I won't say anymore here...watch this space!

Updating our help pages

Another job was to update both the text and the screenshots on our static help pages within AtoM. There have been several changes since 2.2 and some of these are reflected in the look and feel of the interface. 

The advanced search looks a bit different in version 2.4 - here is the refreshed screenshot for our help pages

We were also keen to add in some help for our users around the clipboard feature and to explain how the full width treeview works.

The icons for different types of information within AtoM have also been brought out more strongly in this version, so we also wanted to flag up what these meant for our users.


...and that reminds me, we really do need a less Canada-centric way to indicate place!

Updating our staff handbook

Since we adopted AtoM a few years ago we have developed a whole suite of staff manuals which record how we use AtoM, including tips for carrying out certain procedures and information about what to put in each field. With the new changes brought in with this upgrade, we of course had to update our internal documentation.

When to upgrade?

As we drew ever closer to our 'go live' date for the upgrade we were aware that Artefactual were busy preparing their 2.4.1 bug fix release. We were very keen to get the bug fixes (particularly for that accessions counter bug that I mentioned) but were not sure how long we were prepared to wait.

Luckily, with helpful advice from Artefactual, we were able to follow some instructions from the user forum and install from the GitHub code repository instead of the tarball download on the website. This meant we could benefit from those bug fixes that were already stable (and pull others to test as they became available) without having to wait for the formal 2.4.1 release.

No need to delay our upgrade further!

As it happens, it was just as well we upgraded when we did. The day before the upgrade we hit a bug in version 2.2 during a re-index of Elasticsearch. Good to know we had a clean version of 2.4 ready to go the next day!

Finishing touches

On the 'go live' date we'd put word around to staff not to edit the catalogue while we did the switch. Our systems administrator got all the data from our production version of 2.2 freshly loaded into 2.4, ran the scripts to change the slugs and re-indexed the database. I just needed to do a few things before we asked IT to do the Domain Name System switch.

First, I needed to check all the admin settings were right - a few final tweaks were required here and there. Second, I needed to load the Borthwick logo and banner into our archival institution record. Third, I needed to paste the new help and FAQ text into the static pages (I already had this prepared and saved elsewhere).

Once the DNS switch was done we were live at last! 

Sharing the news

Of course we wanted to publicise the upgrade to our users and tell them about the new features that it brings.

We've put AtoM back on the front page of our website and added a news item.

Let's tell the world all about it, with a catalogue banner and news item

My colleague has written a great blog post aimed at our users and telling them all about the new features, and of course we've all been enthusiastically tweeting!


...and a whole lot of tweeting

Future work

The upgrade is done but work continues. We need to ensure harvesting to our library catalogue still works and of course test out the new EAD harvesting functionality. Later today we will be looking at Search Engine Optimisation (particularly important since we changed our slugs). We also have some remaining tasks around finding aids - uploading pdfs of finding aids for those archives that aren't yet fully catalogued in AtoM using the new functionality in 2.4.

But right now I've got a few broken links to fix...


Web Inventor, Tim Berners-Lee, and W3C Track at The Web Conference 2018

Published 3 May 2018 by Coralie Mercier in W3C Blog.

W3C once again joined The Web Conference, previously known as the WWW Conference, for The Web Conference 2018, which took place last week in Lyon and featured a W3C track, an exhibition booth, and tutorials.

W3C Tutorials took place on Monday and Tuesday of the week with a focus on Data Visualization, Audio on the Web, and Media advances on the Web Platform.
Between Wednesday and Friday, we welcomed conference attendees at the W3C booth.
At the W3C Track on Friday, experts from W3C Members and W3C Team broached topics including new trends on the Web Platform, WebAssembly, WebXR, Web of Things, Social Web Protocols and the foundations of trust as well as Intelligent Search on the Web. The Next Big Thing of Web: Future Web Outlook session ended with a panel about the future of the Web and gave way to active interaction with the audience on hot topics they care about for the Web.

The conference was a huge success, with 2,300 participants from more than 60 countries. Its program was packed with interesting topics, and the line-up of speakers and panelists was outstanding. The W3C Tutorials and W3C Track were very well attended.

Our Director, Tim Berners-Lee participated in the Thursday plenary panel on “Artificial Intelligence & the Web” with five other distinguished panelists from Amazon, Google, eBay, Facebook and Southampton University after the keynote on Conversational AI for Interacting with Digital and Physical World by Ruhi Sarikaya, Director of Applied Science, from the Amazon Alexa team.

You can watch the panel: AI and the future of the Web and the Internet (Sir Tim Berners-Lee (MIT, W3C), Antoine Bordes (Facebook, FAIR Paris), Vinton Cerf (Google), Kira Radinsky (eBay), Ruhi Sarikaya (Amazon), Prof. Dame Wendy Hall (Southampton University) – Chair), as well as the closing address by Mounir Mahjoubi, French Secretary of State for Digital, who shared excellent and insightful remarks on the state of AI and the role of the State on AI.

“This week brings together so many vital areas – industry, research, and those continuing to build the Web. Later this year we will reach a tipping point where 50% of the people in the world are on the Web. At this time of uncertainty and concerns about how the Web can be used for ill as well as good, we may ask what kind of Web this next 50% will be joining. So it is heartening to be amongst so many people working to make the Web better. At W3C, we’re working on innovative technology and Web standards. We ensure the process remains open, fair, international, fosters accessibility, and with a level-playing field. For such powerful technologies require our standards to be developed with trust, consensus and in the open.”

— Sir Tim Berners-Lee, MIT/World Wide Web Consortium, Inventor of the Web.

Tim Berners-Lee talking at The Web Conference 2018

The full list of videos and interviews is available from The Web Conference’s channel. Here are the links to the videos of this year’s three keynotes:


New subscription form

Published 3 May 2018 by Pierrick Le Gall in The Piwigo.com Blog.

The Piwigo.com subscription form has had a full redesign, with three goals in mind: improve VAT management, offer a choice between Individual and Enterprise plans, and make it possible to subscribe for several years.

1) VAT and European laws

The Piwigo.com hosting service is managed by a French company, but our clients come from all over the world - more than 70 countries. Depending on your country, VAT (Value Added Tax) does not apply in the same way. Furthermore, if you or your organisation has a VAT number, other rules apply. No need to go further into technical details: keep in mind that we need to collect a few details to accurately pass VAT back to the appropriate countries.
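To make the idea concrete, here is a rough sketch of how a French seller might decide which VAT rate to apply. The rates and rules below are simplified and illustrative (the VAT number is made up); this is not Piwigo's actual billing code.

```python
# Illustrative subset of EU VAT rates; real rates vary and change over time.
EU_VAT_RATES = {"FR": 0.20, "DE": 0.19, "IT": 0.22}

def vat_rate(country, vat_number=None):
    """Decide the VAT rate a French seller applies to an order.

    Simplified rules: no EU VAT outside the EU; reverse charge (0%) for a
    business with a VAT number in another EU member state; otherwise the
    customer's country rate applies.
    """
    if country not in EU_VAT_RATES:
        return 0.0                    # outside the EU: no EU VAT
    if vat_number and country != "FR":
        return 0.0                    # B2B reverse charge in another EU state
    return EU_VAT_RATES[country]      # B2C: customer's country rate

print(vat_rate("DE", None))           # German individual: German rate
print(vat_rate("DE", "DE123456789"))  # German business: reverse charge
print(vat_rate("US", None))           # outside the EU
```

This is why the form needs both your country and, if you have one, your VAT number: the two together determine which branch applies.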

Piwigo.com subscription: we need a few data from you!

Until we have these details, the subscription form will keep asking for them.

Piwigo.com subscription: give your country and your VAT number, if you have one.

Side note: this rule dates back to 2016, and we were not always asking for your country/VAT number. Until now, we guessed your country from your IP address and assumed that Enterprise clients had a VAT number while Individual clients had none. From now on we won’t have to guess any more!

2) Choice between Individual and Enterprise plans

Piwigo.com subscription: are you an individual or an organisation?

In its early days Piwigo.com sold only a single 39 euros per year plan, for individuals. Piwigo.com Enterprise plans have been official since 2017, but Enterprise clients still had to contact us to create an order. The new subscription form gives organisations the power to create Enterprise orders by themselves, without needing help from Piwigo.com support.

Piwigo.com subscriptions: several options for the Enterprise plan

3) Directly subscribe for 2 or 3 years

Piwigo.com subscription for Individual plan: select your duration

Interesting for Piwigo.com: it will obviously increase our available cash. More cash in the bank makes some investments possible, such as recruitment.

Interesting for clients: by subscribing for 2 years, you get a 10% discount, or 8€ off your bill. By subscribing for 3 years, the discount is 20%, meaning 23€ kept in your pocket. As our clients very commonly stay 5 years or more, we think it is a good deal!
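The savings above check out against the 39€ yearly price. A quick sketch (rounding totals to whole euros is my assumption, chosen to match the advertised figures):

```python
BASE = 39  # euros per year for the Individual plan

def multi_year_price(years, discount):
    """Total price for a multi-year subscription after a percentage
    discount, rounded to whole euros (assumption)."""
    return round(years * BASE * (1 - discount))

# Savings versus paying year by year:
print(2 * BASE - multi_year_price(2, 0.10))  # 2 years at 10% off -> 8
print(3 * BASE - multi_year_price(3, 0.20))  # 3 years at 20% off -> 23
```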


PHPWeekly May 3rd 2018

Published 3 May 2018 by in PHP Weekly Archive Feed.

PHPWeekly May 3rd 2018
Curated news all about PHP.  Here's the latest edition
PHP Weekly 3rd May 2018
Welcome Back PHP Fans.

The famous 30 Days of Testing Challenge is back! This month's theme is E-Commerce testing, with one challenge set per day of the month plus a bonus challenge at the end.

Also this week, after starting, but never finishing, a book on Conference Speaking for Everyone last year, Gary Hockin has posted the first and only finished chapter. All feedback welcome!

The Three Devs and a Maybe Podcast team is joined by Jay Smith, talking all things crypto currency.

Plus PHPDay takes place next week in Verona. Looking at new development traits and best practices, this conference is aimed at IT managers, developers and innovators.

And finally, the May edition of php[architect] magazine has just been released. Titled Treasure, Old and New, this issue looks at documenting things clearly while debugging, and at keeping your application code shiny.

Enjoy your read,

Cheers
Ade and Katie
 

Please help us by clicking to our sponsor:

encrypt php scripts 
Protect your PHP Code
Why not try SourceGuardian 11. Click here to download a 14 Day Trial copy. Protect your code using Windows, Linux or Mac and run everywhere with our free Loaders.

Articles

Choosing PHP in 2018
I bet you do not see those words very often. We live in a time where there are a plethora of programming languages and frameworks. As web developers, we have so many options, it can be very difficult to know what to learn and use to build a modern website.

Remote Working
Since I've been working remotely since last October, I was curious to read more about it in a book by people who have a thriving business with lots of remote workers. At least there should be some useful suggestions in it, and some reassurance that some of my own remote working troubles were nothing special. I found both in this book. Together with some powerful quotes from the book, I wanted to share some of my own discoveries about remote working with you in this post.

30 Days of E-Commerce Testing
Our famous 30 Days of Testing Challenge is back! The theme this time is E-Commerce Testing and this challenge has been kindly sponsored by Sauce Labs - check out their free Selenium Bootcamp, written by Selenium ninja Dave Haeffner.

Celebrate the WordPress 15th Anniversary
May 27, 2018 is the 15th anniversary of the first WordPress release — and we can’t wait to celebrate! Join WordPress fans all over the world in celebrating the 15th Anniversary of WordPress by throwing your own party!

Tutorials and Talks

Creating Custom Stream Filters in PHP
In this post we will see how to create a custom stream filter. Streams, first introduced in PHP 4.3, provide an abstraction layer for file access. A number of different resources besides files - like network connections, compression protocols, etc. - can be regarded as “streams” of data which can be serially read and written to.

How Many Parameters Is Too Many?
Now, that is a classic question, and often a minefield for anyone writing an increasingly long list of arguments in a method, or simply trying to set up auditing tools. Obviously, the answer is not immediate. Parameters may be needed, but on the other hand, currying allows the number of parameters to be reduced to one for every function. In between probably lies a reasonable level that is a golden rule, and also very elusive. So, we decided to check current practice in PHP code.

Solitary or Sociable? Testing Events and Listeners using Laravel
Testing with Laravel is very easy, but it can be a nightmare when the tests depend on Events and Listeners. In this post I’m gonna show you how you can simplify and improve those tests.

Array Destructuring in PHP
In my day to day job I write in a number of programming languages. The majority of my time is spent writing PHP but I very much enjoy writing other languages, such as Go and Javascript, too!

Lightweight Breadcrumbs in Laravel
Breadcrumbs are important in web applications. But most of the time, it’s not the easiest to track the different levels in your URL and generate breadcrumbs from it. Now we give it a try with a simple yet elegant solution.

Creating A React Native App for iOS
React Native is a great JavaScript framework for creating cross-platform, native apps. With one single codebase you can support both Android and iOS. A fellow PHP developer, Jordan Walke, created React to help manage the front end of Facebook back in 2011. React has since been expanded and also spawned its sidekick React Native.  There are multiple reasons to use React Native as opposed to some other cross-platform tool.

How To Deploy Laravel to Kubernetes
Laravel is an excellent framework for developing PHP applications. Whether you need to prototype a new idea, develop an MVP (Minimum Viable Product) or release a full-fledged enterprise system, Laravel facilitates all of the development tasks and workflows. In this article, I’ll explain how to deal with the simple requirement of running a Laravel application as a local Kubernetes set up.

Context Passing
I'm working on another "multi-tenant" PHP web application project and I noticed an interesting series of events. It felt like a natural progression and by means of a bit of dangerous induction, I'm posing the hypothesis that this is how things are just bound to happen in such projects.

How to Set Up a Full-Text Search Using Scout in Laravel
Full-text search is crucial for allowing users to navigate content-rich websites. In this post, I'll show you how to implement full-text search for a Laravel app. In fact, we'll use the Laravel Scout library, which makes implementation of full-text search easy and fun.

Running the Laravel Scheduler and Queue with Docker
In Laravel, one of the tricky changes when switching from a virtual server to Docker is figuring out how to run a scheduler and a queue worker. I see this question come up quite a bit when PHP developers are trying to figure out how to use Laravel with Docker.

Migrate Your Local PHP 7.2 Setup to Homebrew v1.5.*
Last week, Hans Puac, a colleague of mine, posted a small guide in our internal company chat on how to migrate your local PHP environment on macOS to the new Homebrew version 1.5.*. The guide helped a lot of other engineers inside trivago, and I thought it might help more people on the internet. I asked Hans if I could share it, and he approved. So kudos belong to him.

A Quick Controversial Lesson on Type Hints
One lesson I’ve learned the hard way, even when testing, is that you like to write your tests from a clean database. So you make a factory User whose id is 1, and a factory Customer whose id is 1, and two factory Products whose ids are 1 and 2… It is very easy to make your tests pass with the wrong values being inserted if you aren’t careful.

How I Got into Static Trap and Made Fool of Myself
Today the format will be reversed - first I'll show you practical code and its journey to legacy, then theory takeaways that would save it.

Automatically Close Stale Issues and Pull Requests
At Spatie we have over 180 public repositories. Some of our packages have become quite popular. We're very grateful that many of our users open up issues and PRs to ask questions, notify us of problems and try to solve those problems.

News and Announcements

PHP 7.2.5 Released
The PHP development team announces the immediate availability of PHP 7.2.5. This is a security release which also contains several minor bug fixes.

PHP 7.0.30 Released
The PHP development team announces the immediate availability of PHP 7.0.30. This is a security release. Several security bugs have been fixed in this release. All PHP 7.0 users are encouraged to upgrade to this version.

PHP 7.1.17 Released
The PHP development team announces the immediate availability of PHP 7.1.17. This is a security fix release, containing many bugfixes. All PHP 7.1 users are encouraged to upgrade to this version.

PHP 5.6.36 Released
The PHP development team announces the immediate availability of PHP 5.6.36. This is a security release. Several security bugs have been fixed in this release. All PHP 5.6 users are encouraged to upgrade to this version.

Laravel 5.6.19 Released
Laravel version 5.6.19 was released yesterday with support for multiple CC, BCC, and Reply-To recipients. Also, the Optional class now implements the __isset() magic method.

MySQL 8.0 Released With New Features and Improved Performance
The MySQL development team has announced the General Availability of the MySQL 8.0.0 Open Source database.

PHPDay Conference - May 11-12th 2018, Verona
PHPDay is aimed at IT managers, developers and innovators. We'll show new development traits, best-practices and success cases related to quality, revision control, test-driven development, continuous integration and so on. There are also talks about design, project management, agile and various php-related technologies like Zend Framework, Symfony, Laravel, Drupal, WordPress and more. Tickets are on sale now.

China PHP Conference - May 19-20th 2018, Shanghai
We will be hosting a 2-day event filled with high-quality technical sessions about PHP Core, PHP High Performance, PHP Engineering, AI, Blockchain and more. Don’t miss out on two great days of sessions, delicious food, fantastic shows and countless networking opportunities to engage with speakers and delegates. Tickets are on sale now.

PHP Serbia Conference - May 25-27th 2018, Belgrade
PHP Serbia Conference delivers high-value technical content about PHP and related web technologies, architecture, best practices and testing. It offers two days of amazing talks by some of the most prominent experts and professionals in the PHP world in a comfortable and professional setting. Tickets are on sale now.

Podcasts

Three Devs and a Maybe Podcast - Proof of Everything with Jay Smith
In this week’s episode we are lucky to have Jay Smith back on the show to talk all things cryptocurrency. We start off the podcast by briefly recapping what’s been happening within the space since we last spoke. This leads us to discuss the Lightning Network running on the Bitcoin Mainnet, CryptoKitties, ERC-721 tokens and Ethereum Casper. From here we chat about Proof of Work, the environmental impacts of the protocol and how Proof of Stake differs. Finally, we chat about Web3, experiences using PIVX, Steemit and IPFS.

Post Status Draft Podcast - Designing The News
In this episode, Brian and Brian discuss a variety of news topics spanning design, development, and business. Tune in to learn about the history of WordPress and the web, the newest TechCrunch redesign, a WordCamp for WordCamp organisers, and more.

The Changelog Podcast #295: Scaling All The Things At Stack
Julia Grace joined the show to talk about scaling all the things at Slack. Julia is currently the Senior Director of Infrastructure Engineering at Slack, and has been there since 2015 — so she's seen Slack through its hyper-growth. We talked about Slack's growth and scale challenges, scaling engineering teams, the responsibilities and challenges of being a manager, communicating up and communicating down, quality of service and reliability, and what it takes to build high-performing leadership teams.

Reading and Viewing

Conference Speaking for Everyone - Submitting Chapter (Free)
This is a full chapter from my started-but-never-finished book called Conference Speaking for Everyone. I’m posting this finished single chapter that’s been sitting on my laptop for nearly a year to a) Get some feedback and hopefully some impetus to finish the book, and b) Put what I hope is some useful information into the hands of the people who need it. So enjoy the entire full chapter (all 3000+ words of it), and if you like it and would like to see the finished book… MAKE SURE TO PESTER ME ON TWITTER!

Cloudways Interview - James Gurd: Ecommerce - Trends, Scope and Future
James Gurd is among the rare talent who is always ready to help e-commerce enthusiasts learn more about the intricacies of the business. He hosts #EcomChat, a Twitter chat session every Monday, and is the CEO of Digital Juggler, an e-commerce marketing agency that focuses on building long-term relationships. In this interview, we talked to him about the latest trends in e-commerce, marketing strategies that still work, and white hat tactics for successful e-commerce marketing today.

How To Get A Programming Job Without A Degree
Sign up here to be one of the first to receive a free, 80-page guide on how to get a programming job without a degree. The guide will cover curated paths to programming careers, with key resources in fields such as artificial intelligence and web development, technical foundations, and proven techniques for getting jobs even if you don’t have a programming background (the author has worked with and contributed to Springboard, one of the leading online bootcamps for technical skills).

php[architect] Magazine May 2018 - Treasure, Old & New
You’ve probably been deep into debugging an issue, when you have a “How did this ever work?” moment. When you inherit someone else's codebase, you’ve also probably asked: “How is this supposed to work exactly?” I’ve found a lot of programming time is spent alternating between these two extremes. While you may be under pressure to just fix something or get a feature out the door, it’s worth taking a little time to pay-it-forward, usually to your future self, by documenting things clearly, adding tests, and choosing an easy-to-understand solution instead of a clever one. In this issue, we focus on how to keep your application code shiny. 

Jobs

German Speaking PHP Developer (m/f)
You’re proud to call yourself a nerd and consider programming in PHP to be more than just a job? You’d like to help us make our shop better and faster while simultaneously providing our customers with an unparalleled and flawless shopping experience? If you feel like this describes you and also happen to have a weakness for new technology, you’re just the person we’re looking for!




Do you have a position that you would like to fill? PHP Weekly is ideal for targeting developers and the cost is only $50/week for an advert.  Please let me know if you are interested by emailing me at katie@phpweekly.com

Interesting Projects, Tools and Libraries

symplify
Repository to develop Symplify packages. All PRs and issues here.

textpattern
A flexible, elegant, fast and easy-to-use content management system written in PHP. 

passwords
Easy to use yet feature-rich and secure password manager for Nextcloud.

msgpack.php
A pure PHP implementation of the MessagePack serialisation format.

phpboost
This web application allows anyone, without any particular webmastering knowledge, to create their own website.

una
UNA is a CMS - Community Management System - a full-stack web platform for creating and running a community website. 

copona
Copona is an open source PHP e-commerce platform inspired by and based on OpenCart.

rssmonster
Google Reader inspired self-hosted RSS reader written in VueJS with a Laravel Lumen backend. RSSMonster is compatible with the Fever API.

laravel-blog
The purpose of this repository is to show good development practices on Laravel as well as to present cases of use of the framework's functionalities.

sulu
Sulu core bundles & components.

gitelephant
GitElephant is an abstraction layer to manage your git repositories with PHP.

CakePHP3
Plugin for creating and/or rendering PDFs, supporting several popular PDF engines.

 

So, how did you like this issue?

Like us on Facebook | Follow us on Twitter
We are still trying to grow our list. If you find PHP Weekly useful please tweet about us! Thanks.
Also, if you have a site or blog related to PHP then please link through to our site.

 
Copyright © 2018 PHP Weekly, All rights reserved.
Email Marketing Powered by MailChimp

New tool: Wikimedia APT browser

Published 2 May 2018 by legoktm in The Lego Mirror.

I've created a new tool to make it easier for humans to browse Wikimedia's APT repository: apt.wikimedia.org. Wikimedia's servers run Debian (Ubuntu is nearly phased out), and for the most part use the standard packages that Debian provides. But in some cases we use software that isn't in the official Debian repositories, and distribute it via our own APT repository.

For a while now I've been working on different things where it's helpful for me to be able to see which packages are provided for each Debian version. I was unable to find any existing, reusable HTML browsers for APT repositories (most people seem to use the commandline tools), so I quickly wrote my own.

Introducing the Wikimedia APT browser. It's a short (less than 100 lines) Python and Flask application that reads the Packages/Release files that APT uses and presents them in a simple HTML page. You can see the different versions of Debian and Ubuntu that are supported, the different sections in each one, and then the packages and their versions.
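The Packages indices that such a browser reads are plain-text files of colon-separated fields grouped into blank-line-separated stanzas. Here is a minimal, standalone sketch of that parsing step (the real tool is a Flask app; this simplified parser and the sample data are mine, not its actual code):

```python
def parse_packages(text):
    """Parse an APT Packages index into a list of {field: value} dicts.

    Stanzas are separated by blank lines; continuation lines begin with
    whitespace and (in this simplified handling) are folded into the
    previous field's value.
    """
    packages, stanza, last = [], {}, None
    for line in text.splitlines():
        if not line.strip():
            if stanza:                      # blank line closes a stanza
                packages.append(stanza)
                stanza, last = {}, None
        elif line[0].isspace() and last:    # continuation line
            stanza[last] += "\n" + line.strip()
        else:
            key, _, value = line.partition(":")
            stanza[key] = value.strip()
            last = key
    if stanza:                              # trailing stanza without blank line
        packages.append(stanza)
    return packages

sample = """Package: mediawiki-math
Version: 1.0
Architecture: amd64

Package: hhvm
Version: 3.18
"""
for p in parse_packages(sample):
    print(p["Package"], p["Version"])
```

From a list of dicts like this, rendering an HTML table of packages and versions per distribution is straightforward.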

There's nothing really Wikimedia-specific about this, it would be trivial to remove the Wikimedia branding and turn it into something general if people are interested.

The source code is published on Phabricator and licensed under the AGPL v3, or any later version.


Simplify Container Orchestration: Announcing Early Access to DigitalOcean Kubernetes

Published 2 May 2018 by Jamie Wilson in The DigitalOcean Blog.

Simplify Container Orchestration: Announcing Early Access to DigitalOcean Kubernetes

Over the last 18 months, we’ve delivered many cloud primitives to serve developers and their teams in our unique DO-Simple way. We introduced Load Balancers, Monitoring and Alerts, Cloud Firewalls, Spaces, CPU-Optimized Droplets, a new Dashboard, and new Droplet pricing plans. We extended the availability of Block Storage to all regions. All of these primitives make it easier to go from an idea to production without the overhead and complexity of managing cloud infrastructure.

Today, we’re excited to build on those primitives and announce DigitalOcean Kubernetes, a simple and cost-effective way to deploy, orchestrate, and manage container workloads. Deploying workloads as containers provides many benefits for developers, from rapid deployment to isolation and security. But orchestrating those workloads comes with additional layers of complexity that can be difficult for development teams to manage.

Kubernetes has become the leading open source platform for orchestration, with thousands of contributors in the last year alone. DigitalOcean has been running large workloads on Kubernetes over the past two years, and we’re excited to bring our learnings and expertise to our customers.

We designed DigitalOcean Kubernetes with developers and their teams in mind, so you can save time and deploy your container workloads without needing to configure everything from scratch. Automatic deployment of load balancers, block storage, firewalls, ingress controllers, and more makes configuring your cluster on DigitalOcean as simple as deploying a Droplet.

We understand having your data close to your cluster is essential, so you’ll have the option to deploy a private container registry to your cluster with no configuration, and store the images on DigitalOcean Spaces.

In addition to offering Kubernetes on our platform, we are also upgrading our CNCF membership to Gold. We’re committed to contributing to and supporting the open source technologies around containers, and are looking forward to working with CNCF members to continue the evolution of these and related technologies.

The DigitalOcean Kubernetes Early Access Program sign-up starts today, and access for select users begins next month. If you’re part of the program, your cluster will be free through September 2018.

Simplify Container Orchestration: Announcing Early Access to DigitalOcean Kubernetes

UPDATE: June 21, 2018

Since we announced DigitalOcean Kubernetes in May, we've received 20,000 sign-ups for early access. We’re excited to announce our first phase of early access, and want to take this opportunity to share more about our plans.

We will be sending out early access invitations in two phases:

We want to keep everyone up to date on our progress, so we’ll also provide email updates during early access as new product functionality is added, and as our Community team creates new Kubernetes content. Finally, look out for a webinar invitation where we’ll walk you through the early access product as we’d love to hear your feedback through this process.

Happy Coding,
DigitalOcean Kubernetes Product Team


Episode 7: Dan Barrett

Published 1 May 2018 by Yaron Koren in Between the Brackets: a MediaWiki Podcast.

Dan Barrett is a longtime developer and project manager who worked for fifteen years at Vistaprint, where he created, and oversaw the development and maintenance of, the MediaWiki installation. He has also written numerous technical books for O'Reilly, including the 2008 book MediaWiki: Wikipedia and Beyond.   Links for some of the topics discussed:  

Mediawiki - How to disable diacritic/special characters in the URL

Published 1 May 2018 by Małgorzata Mrówka Zielona Mrówka in Newest questions tagged mediawiki - Webmasters Stack Exchange.

My MediaWiki site is in Polish, and MediaWiki converts page titles to URLs 1:1 - meaning a page named Truteń will have a URL containing Truteń, with the Polish diacritic character "ń".

This causes a 400 error to appear when I try to preview pages whose titles contain Polish diacritic letters (ąęćśźżółń) in the Facebook Sharing debugger. Those pages also do not display information from extensions (excerpts from pages while sharing). Miraculously, the main page works, even though it has a diacritic character in its URL ("Strona główna").

How can I disable those characters? I should add that I'm almost a total newbie and am using OVH services (the cheapest plan they had). Here is an example page - http://wiki.mrowki.ovh/index.php?title=Truteń
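For what it's worth, such URLs are valid once the title's UTF-8 bytes are percent-encoded; a short Python sketch shows the encoded form crawlers expect, plus a hypothetical diacritic-stripping workaround (MediaWiki does not do this by default - pages would need renaming):

```python
from urllib.parse import quote

title = "Truteń"
# Percent-encode the title's UTF-8 bytes, the form expected in URLs.
print(quote(title))  # -> Trute%C5%84

# Hypothetical workaround: strip Polish diacritics before building titles.
POLISH = str.maketrans("ąćęłńóśźżĄĆĘŁŃÓŚŹŻ", "acelnoszzACELNOSZZ")
print(title.translate(POLISH))  # -> Truten
```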


GLAM Blog Club – May 2018

Published 1 May 2018 by Nik McGrath in newCardigan.

Andrew kicked off on the theme of control with Joy Division’s She’s Lost Control. Play the track and read on… And she gave away the secrets of her past… Andrew argues for and against copyright in the case of researchers accessing special collections. Control measures by some libraries are put in place preventing digital copies of donor material being made without donor permission. Should libraries take a risk, like some do, and place the onus or control back in the hands of the user to do the right thing, making digital copies for reference but trusting users not to break copyright?

Phillipa, a PhD student, took time off from her PhD to care for her daughter who was diagnosed with Stage 4 lymphatic cancer. “I am outwardly an organised student, but library books were the last thing on my mind as I struggled to appear normal and in control”. The tale of 23 Overdue Books is about feeling out of control, receiving a $1000 library fine, and ultimately the compassion of a librarian who waived the fine.

Michelle’s blog Controlling your online data and privacy gives some fantastic tips about how to protect your privacy online. “You don’t need these companies to control all your data for you…”

Control your files, Niamh states: “Control over your files does take a little time to set up, but the benefits are that your information will be searchable, backed up, restorable and reusable.” “Try to leave your files in a state that the future version of you can use.” Walk the talk.

Hugh is the technical genius behind newCardigan’s systems. In his blog Building our own house Hugh describes the journey to setup systems protecting the privacy of our members and participants. “We’re not quite running our own servers in the spare room, but I’m pretty happy with how far we’ve managed to move towards running our own systems so we don’t force members and participants to hand over data to third parties just so they can socialise with other GLAM people. As much as possible, it’s newCardigan members, or at worst, newCardigan as an organisation, in control.”

Control those tabs, Kathryn gives a guide on how to set up preferences with Chrome for websites that you access daily.

Sam’s blog Getting to “good enough”: thoughts on perfectionism is an honest analysis and reflection on the negative aspects of perfectionism in the workplace.

Libraries becoming the new park, Melly argues for the need for librarians and library technicians to continue to manage public libraries, arguing against the trend in public libraries for using library spaces for other purposes and understaffing with the notion that customers can serve their own needs within the library. “If public parks cannot control human behaviour, what about libraries without staff?”

Amy challenges us to control our present, future, environment, thoughts, voice and relationships in her blog Taking control of the small things.

Want to be a happier librarian? You’re in control! Anne believes that happiness is something we control: “It doesn’t help to get upset or anxious about things you can’t control so focus on the things that you can.”

Sarah’s Control of GLAMR information … in my inbox is all about taking control of the subscriptions that overload our inboxes with information, in this case GLAMR information: what is still relevant, and what does Sarah now receive in other ways?

GLAM Blog Club – Control Kara acknowledges that control of her career is difficult to attain, but perhaps it’s important to celebrate the small wins. I think most people often feel out of control of their career, but joining the conversation here is definitely a win! Thanks Kara.

In my blog Democratisation in action, I argue that: “Although it’s important that archivists maintain control of the systems that ensure items are trackable and findable, it is also important that archivists enable access. Raising the profile of archival collections and awareness of the content available within collections provides more opportunities for individuals from diverse backgrounds to interpret archival material in new and interesting ways. This is democratisation in action.”

Matthew’s Custodial Control of Digital Assets makes a compelling argument for case by case consideration in collecting born digital items: “…you cannot always control what you receive when it comes to digital collections. Standards are there for guidance and sometimes decisions need to be made on whether to allow something into the collection that does not meet them. The intrinsic value of the object, its uniqueness and rarity may very well trump the technical requirements for digital collecting. When dealing with born-digital photographs for example, where some institutions prefer a Camera Raw or uncompressed TIFF file format, a low resolution JPEG would also be accepted under the right circumstances.”

The terror and value of asking for feedback, Stacey gives advice that feedback is valuable, so it’s worth giving up control by: “Putting things out there and asking for feedback…”

Queerying the catalogue: Control, classification, chaos, curiosity, care and communities, Clare is “reflecting on the problematic histories of classification in librarianship and in psychology, particularly in relation to LGBTIQA+ communities, my complicated relationship with labels, and the power of play to help librarians become more comfortable with letting go of at least some of our control and authority, find courage in chaos, embrace fluidity, and change the system.”

Associate, collocate, disambiguate, infuriate, Alissa on her thoughts on “…relinquishing some of my control over the form and display of titles within a catalogue.”

GLAM Blog Club – Control, Rebecca questions: “So what happens when you put a control freak into the world of museums?” Weekly goal lists, problem solving skills and throwing yourself into the deep end, will help you no end.

Authority Control – Can I haz it? Clare on the world of cataloguing and control vocabs, putting theory into practice.

Thank you for your blogs on control, it proved to be a popular theme!

Have you ever walked into a gallery and cried at the sight of a painting? Felt waves of emotion reading a letter in the archives? Have you reacted passionately about something you care deeply about in a meeting at work?

Passion is our theme for GLAM Blog Club this month.

Some might argue that passion is the opposite of control. We anticipate a lovely contrast between last month and this month’s blogs.

Please don’t forget to use the tag GLAM Blog Club in your post, and #GLAMBlogClub for any social media posts linking to it. If you haven’t done so yet, remember to register your blog at Aus GLAM Blogs. Happy blogging!


Wikibase of Wikibases

Published 30 Apr 2018 by addshore in Addshore.

The Wikibase registry was one of the outcomes of the first in a series of Federated wikibase workshops organised in partnership with the European Research Council.

The aim of the registry is to act as a central point for details of public Wikibase installs hosted around the web. Data held about the installs currently includes the URL for the home page, Query frontend URL and SPARQL API endpoint URL (if a query service exists).

During the workshop an initial data set was added, and this can be easily seen using the timeline view of the query service and a query that is explained within this post.

Setting up the Wikibase install

The registry is running on the WMF Cloud infrastructure using the wikibase and query service docker images on a single m1.medium VPS with 2 CPUs, 4GB RAM and 40GB disk.

The first step was to request the creation of a project for the install. The current process for this is to create a Phabricator ticket, and that ticket can be seen here.

Once the project was created I could head to horizon (the openstack management interface) and create a VPS to host the install.

I chose the m1.medium flavour for the 4GB memory allowance. As is currently documented in the wikibase-docker docker-compose example readme the setup can fail with less than 3GB memory due to the initial spike in memory usage when setting up the collection of docker services.

Once the machine was up and running I could install docker and docker-compose by following the docker docs for Debian (the OS I chose during the machine creation step).

With docker and docker-compose installed it was time to craft my own docker-compose.yml file based on the example currently present in the wikibase-docker repo.

The key environment variables to change were:

The docs for the environment variables are visible in the README for each image use for the service. For example the ‘wikibase’ image docs can be found in this README.
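As a rough sketch, the environment section of such a docker-compose.yml might look like the following. This is an illustrative fragment only: the variable names follow the wikibase-docker image READMEs as I understand them, and every value here is a placeholder, not the registry's real configuration.

```yaml
# Illustrative fragment of a docker-compose.yml for a Wikibase install.
# Variable names follow the wikibase-docker READMEs; values are placeholders.
services:
  wikibase:
    image: wikibase/wikibase
    environment:
      MW_SITE_NAME: "Wikibase Registry"   # name shown on the wiki
      MW_ADMIN_NAME: "admin"              # initial admin account
      MW_ADMIN_PASS: "change-me"          # initial admin password
      DB_SERVER: "mysql.svc:3306"         # points at the mysql service below
      DB_NAME: "my_wiki"
      DB_USER: "wikiuser"
      DB_PASS: "change-me"
  mysql:
    image: mariadb
    environment:
      MYSQL_DATABASE: "my_wiki"           # must match DB_NAME above
      MYSQL_USER: "wikiuser"
      MYSQL_PASSWORD: "change-me"
```

The key point is that the database credentials are shared between the wikibase and mysql services, so a change in one place must be mirrored in the other.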
Once created it was time to start running the services using the following command:

user@wbregistry-01:~/wikibase-registry# docker-compose up -d
Creating network "wikibase-registry_default" with the default driver
Creating volume "wikibase-registry_mediawiki-mysql-data" with default driver
Creating volume "wikibase-registry_mediawiki-images-data" with default driver
Creating volume "wikibase-registry_query-service-data" with default driver
Creating wikibase-registry_mysql_1 ... done
Creating wikibase-registry_wdqs_1     ... done
Creating wikibase-registry_wikibase_1   ... done
Creating wikibase-registry_wdqs-proxy_1   ... done
Creating wikibase-registry_wdqs-updater_1  ... done
Creating wikibase-registry_wdqs-frontend_1 ... done

The output of the command stated that everything correctly started, and I double checked using the following:

user@wbregistry-01:~/wikibase-registry# docker-compose ps
              Name                             Command               State          Ports
-------------------------------------------------------------------------------------------------
wikibase-registry_mysql_1           docker-entrypoint.sh mysqld      Up      3306/tcp
wikibase-registry_wdqs-frontend_1   /entrypoint.sh nginx -g da ...   Up      0.0.0.0:8282->80/tcp
wikibase-registry_wdqs-proxy_1      /bin/sh -c "/entrypoint.sh"      Up      0.0.0.0:8989->80/tcp
wikibase-registry_wdqs-updater_1    /entrypoint.sh /runUpdate.sh     Up      9999/tcp
wikibase-registry_wdqs_1            /entrypoint.sh /runBlazegr ...   Up      9999/tcp
wikibase-registry_wikibase_1        /bin/sh /entrypoint.sh           Up      0.0.0.0:8181->80/tcp

Wikibase and the query service UI were exposed on ports 8181 and 8282 on the machine respectively, but the openstack firewall rules would block any access from outside the project by default, so I created 2 new rules allowing ingress from within the labs network (range 10.0.0.0/8).

I could then set up a web proxy in horizon to map some domains to the exposed ports on the machine.

With the proxies created the 2 services were then accessible to the outside world:

Adding some initial data

The first version of this repository was planned to hold just Items for Wikibase installs, so the initial list of properties could be pretty straightforward. A link to the homepage of the wiki is of course useful, and enables navigating to the site. Sites may not expose a query service in a uniform way, so a property would also be needed for this. The SPARQL endpoint used by the query service could also differ, thus another property would be needed. And finally, to be able to display the initial data on a timeline, an initial creation date would be needed. I added a property for the install logo to make the timeline a little prettier.

The properties created initially to describe Wikibase installs (with example data values for wikidata.org) can be seen below:

Some other properties were also created:

I then added all the other Wikibase instances run by the WMF, which included the test and beta Wikidata sites. Wikiba.se also contains a list of Wikibase installs (although it is out of date). I also managed to find some new installs on wikiapiary by looking at Wikibase Repo extension usage. And of course some of the people in the room had instances to add to the list.

I based the creation date on the rough creation of the first item, or an official inception date. All of the creation date statements should probably have references.

The timeline query

The below SPARQL queries show the creation of a federated timeline query crossing the local wikibase query service (for the registry) and also the wikidata.org query service.

1) Select all Items with our date property (P5):

SELECT ?item ?date
WHERE {
    ?item wdt:P5 ?date .
}

2) Use the label service to select the Item Labels instead of IDs:

SELECT ?itemLabel ?date
WHERE {
    ?item wdt:P5 ?date .
    SERVICE wikibase:label { bd:serviceParam wikibase:language "[AUTO_LANGUAGE],en" }
}

3) Also select the logo (P8) if it exists:

SELECT ?itemLabel ?date ?logo
WHERE {
    ?item wdt:P5 ?date .
    SERVICE wikibase:label { bd:serviceParam wikibase:language "[AUTO_LANGUAGE],en" }
    OPTIONAL { ?item wdt:P8 ?logo }
}

4) Display the results on a timeline by default:

#defaultView:Timeline
SELECT ?itemLabel ?date ?logo
WHERE{
    ?item wdt:P5 ?date .
    SERVICE wikibase:label { bd:serviceParam wikibase:language "[AUTO_LANGUAGE],en" }
    OPTIONAL { ?item wdt:P8 ?logo }
}

5) Also include some results from the wikidata.org query service (using federated queries) to show the WikidataCon events:

In this query new prefixes are needed for wikidata.org as the default “wd” and “wdt” prefixes point to the local wikibase install.
Q37807168 on wikidata.org is “WikidataCon” and P31 is “instance of”.

#defaultView:Timeline

PREFIX wd-wd: <http://www.wikidata.org/entity/>
PREFIX wd-wdt: <http://www.wikidata.org/prop/direct/>

SELECT ?itemLabel ?date (SAMPLE(?logo) AS ?image)
WHERE
{
  {
   ?item wdt:P5 ?date .
   SERVICE wikibase:label { bd:serviceParam wikibase:language "[AUTO_LANGUAGE],en" }
   OPTIONAL { ?item wdt:P8 ?logo }
  }
 UNION
  {
   SERVICE <https://query.wikidata.org/sparql> {
    ?item wd-wdt:P31 wd-wd:Q37807168 .
    ?item wd-wdt:P580 ?date .
    SERVICE wikibase:label { bd:serviceParam wikibase:language "[AUTO_LANGUAGE],en" }
    OPTIONAL { ?item wd-wdt:P154 ?logo }
   } 
  }
}
GROUP BY ?itemLabel ?date

This generates the timeline that you see at the top of the post.

Other issues noticed during setup

Some of the issues were known before this blog post, but others were fresh. Nonetheless, if you are following along, the following issues and tickets may be of help:

The post Wikibase of Wikibases appeared first on Addshore.


Catch Us in Copenhagen for KubeCon EU

Published 27 Apr 2018 by Jaime Woo in The DigitalOcean Blog.

Catch Us in Copenhagen for KubeCon EU

UPDATE: Catch the talks, now embedded below!

Next week is KubeCon EU in Copenhagen, Denmark. We're already drooling at the idea of diving into smørrebrød, perhaps near the famed Little Mermaid statue.

DigitalOcean will have two speakers and a booth at KubeCon EU:

On Wednesday, May 2, from 2:45 PM-3:20 PM, Matt Layher presents "How To Export Prometheus Metrics From Just About Anything."

Prometheus exporters bridge the gap between Prometheus and systems which cannot export metrics in the Prometheus format. During this talk, you will learn how to gather metrics from a wide variety of data sources, including files, network services, hardware devices, and system calls to the Linux kernel. You will also learn how to build a reliable Prometheus exporter using the Go programming language. This talk is intended for developers who are interested in bridging the gap between Prometheus and other hardware or software.

Then, on Thursday, May 3, Andrew Kim speaks from 2:45PM-3:20PM on "Global Container Networks on Kubernetes at DigitalOcean."

Building a container network that is reliable, fast and easy to operate has become increasingly important in DigitalOcean’s distributed systems running on Kubernetes. Today’s container networking technologies can be restrictive as Pod and Service IPs are not reachable externally which forces cluster administrators to operate load balancers. The addition of load balancers introduces new points of failure in a cluster and hinders observability since source IPs are either NAT’d or masqueraded.

This talk will be a deep dive of how DigitalOcean uses BGP, Anycast and a variety of open source technologies (kube-router, CNI, etc) to achieve a fast and reliable container network where Pod and Service IPs are reachable from anywhere on DigitalOcean’s global network. Design considerations for scalability, lessons learned in production and advanced use cases will also be discussed.

You can also catch us in Hall C, at booth number G-C06. We’ll be tending the booth, where we'll be giving demos and answering questions:

Vi snakkes ved!


cardiParty 2018-05 with Anna Burkey

Published 26 Apr 2018 by Justine in newCardigan.

Join Anna Burkey for a preview of the SLV's new StartSpace centre for early-stage entrepreneurs.

Find out more...


Getting Started with an Incident Communications Plan

Published 26 Apr 2018 by Blake Thorne in The DigitalOcean Blog.

Getting Started with an Incident Communications Plan

At Statuspage, we believe it’s never too early for a team to start thinking about an incident communications plan. When your first big incident happens is way too late. Unplanned downtime can cause customer churn and unmanageable inbound support volume. Just one hour of unplanned downtime can cost organizations more than $100,000—and often much more—according to the latest annual downtime survey from Information Technology Intelligence Consulting Research.

Some downtime is inevitable; even massive organizations experience outages from time to time. The good news is the harm from downtime can be mitigated by deploying reassuring context and information in a timely fashion. You may hope to never need an incident communications plan but, as any good Site Reliability Engineer (SRE) will tell you, hope is not a strategy.

Mapping out your team’s first incident communications strategy doesn’t have to be overly complex or resource-draining. In fact, you can accomplish it fairly quickly using these four steps:

Before the Incident

Know what constitutes an incident

Sometimes it’s hard to know what exactly to label as an “incident.” Here’s a set of guidelines Google SREs use, where if any one of the following is true the event is considered an incident:

Feel free to adopt these exact guidelines, adjust them, or write your own. “If any one of the following is true” is a good format. (Another helpful resource for mapping incident severity is this Severity Definitions guide from VMware.)
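The "if any one of the following is true" format translates directly into code, which can make the policy easy to audit. Here is a minimal Python sketch; the condition names are purely illustrative placeholders, not Google's actual list:

```python
# Minimal sketch of an "if any one of the following is true" incident policy.
# Condition names are illustrative placeholders, not an official checklist.
def is_incident(conditions: dict) -> bool:
    """An event is an incident as soon as ANY condition holds."""
    return any(conditions.values())

event = {
    "visible_customer_impact": True,
    "second_team_involved": False,
    "unresolved_after_an_hour": False,
}
print(is_incident(event))  # True: a single true condition is enough
```

Encoding the checklist this way also makes "adjust them, or write your own" a matter of editing one dictionary.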

A note on playing it safe: in our experience it’s better to overcommunicate in situations where you’re uncertain. The inconvenience of closing the loop on an expected incident that never took off is far outweighed by the downside of playing catch-up on incident comms hours into an incident.

“I’ll just fix this quickly before anyone notices,” is a slippery slope. You might gamble and win the first time you try that, but play the game enough and eventually you’ll lose.

Team Roles

Define key roles and expectations for incident responders. Clear labels and expectations can prevent a lot of damage in the heat of an incident. While large teams and complex SRE organizations have a web of roles and responsibilities, we see two roles as a good starting point.

Incident commander

The incident commander is in charge of the incident response, making sure everyone is working toward resolution and following through on their tasks. They also are in charge of setting up any communications and documentation channels for the incident. That could be chat rooms, shared pages for documenting the incident, and even physical spaces in the office. This person also drives the post-incident review.

Communicator

The communicator is in charge of translating the technical information into customer communications and getting those communications out via the right channels. They also monitor incoming customer communications and notify the incident commander if new groups of customers become impacted. After the incident, they ensure the post-mortem gets sent out.

Our recommendation: make it clear from the beginning who has what role in an incident. Even if these people have the bandwidth to help with other areas of the incident, they should respond to these primary objectives first and delegate other tasks where necessary.

Preparation

With a lean team, any time saved during an incident means a lot. Figuring out the right way to wordsmith an announcement can take up precious time in the heat of an incident.

Decide on boilerplate language ahead of time and save it in a template somewhere. Use it to plug in the relevant details during an incident when you need it.

Here is one incident template we use here for our own status page:

"The site is currently experiencing a higher than normal amount of load, and may be causing pages to be slow or unresponsive. We're investigating the cause and will provide an update as soon as possible.”

This language is very simple and generic, and can be deployed as-is in a lot of cases where this is all we know. We can also amend the language to add more relevant details if we have them. For example:

“The site is currently experiencing a higher than normal amount of load due to an incident with one of our larger customers. This is causing about 50% of pages to be unresponsive. We're investigating the cause and will provide an update as soon as possible.”
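The stored-boilerplate-plus-details approach can be sketched with Python's standard string templating. This is a hypothetical helper, not Statuspage's actual tooling; the `$symptom` placeholder name is ours:

```python
from string import Template

# Boilerplate decided ahead of time; $symptom is filled in during an incident.
INVESTIGATING = Template(
    "The site is currently experiencing $symptom. "
    "We're investigating the cause and will provide an update "
    "as soon as possible."
)

# In the heat of an incident, only the details actually known get supplied.
message = INVESTIGATING.substitute(
    symptom="a higher than normal amount of load"
)
print(message)
```

Keeping the template in one place means the generic wording is pre-approved, and responders only decide on the details.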

You should also define your communications channels during an incident. While we obviously recommend Statuspage, there are a lot of tools you can use: Twitter, email, and company blog, as examples. Just make sure you’re clear where you will be posting messages.

During the incident

Once the incident begins, we recommend these three “golden rules” which are worth keeping in mind during the incident.

Early

It’s important to communicate as soon as there is any sign that the incident is impacting customers. Get a message posted as early as possible. It doesn’t have to be perfect. This message serves to reassure users that you’re aware of the issue and actively looking into it. This will also slow down the flood of support tickets and inbound messaging you’re sure to receive during incidents.

Often

When you’re heads-down working on an incident, it can be easy to let regular updates slide. But these long gaps between updates can cause uncertainty and anxiety for your customers. They can start to expect the worst. Even if you’re just updating to say that you’re still investigating the matter, that’s better than no communication. Bonus points if you give an estimate on when next comms will be (and stick to it).

Here’s an example from a 2016 HipChat incident.

Precision

In your messaging during the incident, be as precise as you can be without guessing or giving non-committal answers.

Instead of:

“We think we know what’s going on but we need more time.”

Try:

“We’re still working to verify the root cause.”

Instead of:

“The problem seems to be database related.”

Try:

“We’re continuing to investigate the problem.”

At first glance this second example may seem counterintuitive. Why leave out the fact that the issue could be database related? Because you aren’t sure yet. Avoid hedging words like “we think.” Don’t say you “think” you found the root cause. Either you have actually found the cause or you haven’t.

Once you’ve confirmed the cause, then clearly state as much detail as you’re able to.

For example:

“We’ve identified a corruption with our database related to our last deploy. We are currently rolling back that deploy and monitoring results.”

After the Incident

Some of the biggest opportunities for your team come in the moments after the dust settles from an incident. Your team ideally will run a Post Incident Review session to unpack what happened on the technical side. It’s also a great time to build customer trust by letting them know that you’re taking the incident seriously and taking steps to ensure it doesn’t happen again.

An incident post-mortem is meant to be written after the incident and give a big picture update of what happened, how it happened, and what steps the team is taking to ensure it isn’t repeated. Here are our post-mortem rules.

Empathize

Apologize for the inconvenience, thank customers for their patience, and ensure you’re working on a fix.

Be personal

We see it all the time: teams depersonalize themselves in an effort to seem professional or official. This leads to a cold, distant tone in post-mortems that doesn’t build trust.

Use active voice and “we” pronouns to tell your story. Steer away from words that are overly academic or corporate sounding when simple ones will do.

Instead of:

“Remediation applications on the new load balancer configurations are finalized.”

Try:

“We’ve completed the configuration on the new load balancer.”

Details inspire confidence

People have a good sense for when you’re using a lot of words but not really saying anything. Details are the way to keep your post-mortem from sounding like a lot of hot air.

Here’s an example from a post-mortem Facebook engineers posted after a 2010 incident.

Consider this paragraph:

“Today we made a change to the persistent copy of a configuration value that was interpreted as invalid. This meant that every single client saw the invalid value and attempted to fix it. Because the fix involves making a query to a cluster of databases, that cluster was quickly overwhelmed by hundreds of thousands of queries a second.”

That’s likely a more technical explanation than most readers will need. The ones who want this level of detail will appreciate it. The ones who don’t will at least recognize that you’re going above and beyond to explain what happened. A lot of teams worry about being too technical in their messaging and instead wind up sending watered-down communications. Opt for specific details instead.

Close the loop

The post-mortem is your chance to have the last word in an incident. Leave the reader with a sense of trust and confidence by laying out clearly what you’re doing to keep this from happening again.

Here’s an example from a Twilio post-mortem:

“In the process of resolving the incident, we replaced the original redis cluster that triggered the incident. The incorrect configuration for redis-master was identified and corrected. As a further preventative measure, Redis restarts on redis-master are disabled and future redis-master recoveries will be accomplished by pivoting a slave.

The simultaneous loss of in-flight balance data and the ability to update balances also exposed a critical flaw in our auto-recharge system. It failed dangerously, exposing customer accounts to incorrect charges and suspensions. We are now introducing robust fail-safes, so that if billing balances don’t exist or cannot be written, the system will not suspend accounts or charge credit cards. Finally, we will be updating the billing system to validate against our double-bookkeeping databases in real-time.”

Notice how specific this is with outlining what went wrong and exactly what the team is putting in place to keep the problem from repeating.

Even though users today expect 24/7 services that are always up, people are tolerant of outages. We’ve heard a lot of stories about outages over the years at Statuspage, and nobody ever went out of business by being too transparent or communicative during an incident. Consider the kind of information and transparency you’d like to receive from the products and vendors you use, and try to treat your users the way you’d like to be treated.

Looking to go even deeper on incident comms exercises? Check out our recent Team Playbook plays on incident response values and incident response communications.

Blake Thorne is a Product Marketing Manager at Statuspage, which helps teams big and small with incident communications. He can be reached at bthorne@atlassian.com or on Twitter.


PHPWeekly April 26th 2018

Published 26 Apr 2018 by in PHP Weekly Archive Feed.

PHPWeekly April 26th 2018
Curated news all about PHP. Here's the latest edition.
PHP Weekly 26th April 2018
A very warm welcome to you out there in the PHP community, and thank you for joining us :)

What makes PHP popular? After seeing this question pop up on a forum recently, Eric Barnes tells us why he started using PHP back in the early 2000s.

Also this week we learn how to retrieve YouTube video thumbnails in PHP without using the YouTube api.

This month's php[podcast] looks at the April edition of php[architect] magazine, Testing in Practice.

Plus we have an article on creating a WordPress plugin.

And finally, the International PHP Conference, the world's first PHP conference, takes place over 5 days this June in Berlin. Tickets are on sale now.

Enjoy your weekend,

Cheers
Ade and Katie
 

Please help us by clicking to our sponsor:

encrypt php scripts 
Protect your PHP Code
Why not try SourceGuardian 11. Click here to download a 14 Day Trial copy. Protect your code using Windows, Linux or Mac and run everywhere with our free Loaders.

Articles

What Makes PHP Popular?
I saw this question pop up on a forum and I know every PHP developer has their own reasons, but it made me think of why I initially picked PHP way back in the early 2000s when v4.3 was king.

Atlas.ORM “Cassini” (v3) Early-Access Alpha Release
For those of you who don’t know, Atlas is an ORM for your persistence model, not your domain model. Atlas 1 “Albers” (for PHP 5.6) was released in April 2017. Atlas 2 “Boggs” (for PHP 7.1) came out in October 2017. And now, in April 2018, we have an early-access release of Atlas 3 “Cassini”, the product of several lessons from a couple of years of use.

12 Best Contact Form PHP Scripts
Contact forms are a must have for every website. They encourage your site visitors to engage with you while potentially lowering the amount of spam you get. Whether your need is for a simple three-line contact form or a more complex one that offers loads of options and functions, you’re sure to find the right PHP contact form here in our 12 Best Contact Form PHP Scripts on CodeCanyon.

Tutorials and Talks

How to Download YouTube Video Thumbnails Using PHP
The following post shows how you can retrieve any YouTube video thumbnails in PHP without using the YouTube api. I needed this for a onetime solution to get some thumbnails for a few videos and using the api seemed a little time consuming. So here goes.

Flexible Heredoc and Nowdoc Coming to PHP 7.3
Updates to the Heredoc and Nowdoc syntaxes proposed in a php.net RFC have been made for the upcoming PHP 7.3 release. The updates focus on improving look and readability.

Implementing Abstract Classes and Interfaces with Traits
There are some very helpful ways to organise classes and objects in PHP. Interfaces, Traits, and Abstract Classes can work together to make code that follows the rules of abstraction. Using these OOP features in PHP allows us to create code that is easy to extend and maintain, saving time and money down the road.  Let’s look at what these structures are as well as how and when abstract classes, traits, and interfaces can be useful in Object Oriented PHP.

JWT Authentication For Lumen 5.6
Recently I have been tinkering with Vue.js to get a taste of it and I decided to create a quick project to get my hands dirty. I decided to create a blog with authentication etc. My main focus was on the front-end so I decided to quickly bootstrap an application in Lumen because of its simplicity and almost zero-configuration development. For the authentication, I decided to go with JWT and this post is going to be a quick write-up on how I integrated that and how anyone can integrate JWT authentication in their APIs. I hope you are excited, so let's get started.

Testing in Laravel
Irrespective of the application you're dealing with, testing is an important and often overlooked aspect that you should give the attention it deserves. Today, we're going to discuss it in the context of the Laravel web framework.

QueryFilter: A Model Filtering Concept
Having a clean code with single responsibility is important, and doing that for model filtering can be easy and very powerful. Believe me.

Make Your Chatbots GDPR Compliant
Only one month left until the GDPR will take effect and people are already freaking out. If you haven't made yourself familiar with this topic, you need to do it now! This article will give you a summary of what you need to know and provide you with steps to make your chatbots GDPR compliant.

Combing Legacy Code String by String
I find it very curious that legacy (PHP) code often has the following characteristics: Classes with the name of a central domain concept have grown too large; Methods in these classes have become very generic.

Class Property and Method Visibility in PHP
Have you ever been confused by the difference between “private” and “protected” properties in PHP? Do you have to declare all your properties as “public”? What would you use a “private” method for? If you’ve ever had questions about method or property visibility in PHP, read on. I hope this post will improve your understanding of it.
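A quick illustration of the three visibility levels (the class names are hypothetical): `private` members are reachable only inside the declaring class, `protected` members also inside subclasses, and `public` members from anywhere.

```php
<?php
class Account
{
    private $balance = 0;        // visible only inside Account itself
    protected $currency = 'USD'; // visible in Account and its subclasses

    public function deposit(int $amount): void
    {
        $this->balance += $amount; // private state changed via public API
    }

    public function balance(): int
    {
        return $this->balance;
    }
}

class SavingsAccount extends Account
{
    public function currency(): string
    {
        return $this->currency; // protected: allowed in a subclass
    }
}

$acct = new SavingsAccount();
$acct->deposit(100);
```

Trying `$acct->balance` or `$acct->currency` from outside these classes would raise a fatal error, which is exactly the point: callers can only go through the public methods.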

What's New and Changing in PHP 7.3
This is a live document (until PHP 7.3 is released as generally available) on changes and new features to expect in PHP 7.3, with code examples, relevant RFCs, and the rationale behind them, in their chronological order.

How to Slowly Turn your Symfony Project to Legacy with Action Injection
The other day I saw the question on Reddit about Symfony's controller action dependency injection. More people around me are hyped about this new feature in Symfony 3.3 that allows to autowire services via action argument typehints. It's new, it's cool and no one has a bad experience with it. The ideal candidate for any code you write today. 

Leverage Eloquent To Prepare Your URLs
It’s not uncommon to have tens, if not hundreds, of views in a Laravel application. Something that soon gets out of hand is the various references to routes. If, for whatever reason, we have to change either the route alias or the default query string values, you'll soon find yourself doing mass string replacements across your entire application, which brings the risk of breakage in many files.
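One framework-free way to picture the approach: URL generation lives in a single method on the model, so views ask the model for its URL and a route change happens in one place. The `Post` class below is a hypothetical stand-in for an Eloquent model:

```php
<?php
// Hypothetical model: the URL scheme is defined once, here,
// instead of being repeated as strings throughout the views.
class Post
{
    private $slug;

    public function __construct(string $slug)
    {
        $this->slug = $slug;
    }

    public function url(array $query = []): string
    {
        $url = '/posts/' . $this->slug;
        return $query ? $url . '?' . http_build_query($query) : $url;
    }
}

$post = new Post('hello-world');
```

If the path prefix or default query parameters ever change, only `url()` needs editing rather than every view that links to a post.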

How We Create WordPress Plugins: From Idea To Release
If you’ve been reading our blog for a while, you’ve probably seen some of our tutorials on developing plugins using different technologies like React and Vue. But when not writing examples for blog posts, we rarely, if ever, dive into creating a new plugin, and there’s a lot more that goes into it before we write a single line of code. In this week’s post, we’ll be taking a look at everything we do to create a new product or WordPress plugin. We don’t adhere strictly to any specific software development process, but the method we’re using currently seems to work well.

News and Announcements

International PHP Conference - June 4-8th 2018, Berlin
The International PHP Conference is the world’s first PHP conference and has stood for more than a decade for top-notch, pragmatic expertise in PHP and web technologies. Internationally renowned experts from the PHP industry meet up with PHP users and developers from large and small companies. This is the place where concepts emerge and ideas are born: the IPC signifies knowledge transfer at the highest level. All delegates of the International PHP Conference have, in addition to the PHP program, free access to the entire range of the webinale taking place at the same time. Tickets are on sale now.

Dutch PHP Conference - June 7-9th 2018, Amsterdam
Ibuildings is proud to organise the eleventh Dutch PHP Conference on June 8th and 9th, plus a pre-conference tutorial day on June 7th. Both programs will be completely in English, so the only Dutch thing about it is the location. Keywords for these days: Know-how, Technology, Best Practices, Networking, Tips & Tricks. The target audience for this conference is PHP and mobile web developers of all levels, software architects, and even managers. Beginners will find many talks aimed at helping them become better developers, while more experienced developers will come away inspired to do even better and with knowledge about the latest tools and methodologies. Tickets are on sale now.

WavePHP Conference - September 19th-21st 2018, San Diego
WavePHP Conference is bringing the wonderful PHP community to the Southwest United States. Designed to be a conference for both professionals and hobbyists alike. Held in beautiful southern California's San Diego County the area has ideal weather and tons of activities. Blind Early Bird Tickets are on sale now until the weekend.

Podcasts

php[podcast] Episode 9: Testing in Practice
Our hosts, Eric van Johnson and John Congdon, dive into Testing in Practice and the April 2018 issue of php[architect] magazine. Share your thoughts on the topics covered and leave a comment below.

Full Stack Radio Podcast Episode 87: Chris Fritz - Vue.js Anti-Patterns (and How to Avoid Them)
In this episode, Adam talks to Chris Fritz about common mistakes people make when designing Vue.js applications, and better ways to solve the same problems.

The Changelog Podcast #294: Code Cartoons, Rust, and WebAssembly
Lin Clark joined the show to talk about Code Cartoons, her work at Mozilla in the emerging technologies group, Rust, Servo, and WebAssembly (aka Wasm), the Rust community's big goal in 2018 for Rust to become a web language (thanks in part to Wasm), passing objects between Rust and JavaScript, Rust libraries depending on JavaScript packages and vice versa, Wasm ES Modules, and Lin's upcoming keynote at Fluent on the parallel future of the browser.

PHP Ugly Podcast #102: We Address Congress
Topics include tips for working remotely and Todoist.

North Meets South Web Podcast Episode 44: ANZACs, Queues, and File Uploads
Jake and Michael return in an irregular time slot to discuss ANZACs, scaling Laravel, queues, handling file uploads, and more!

Post Status Draft Podcast - Contextualised Learning In Or Around WordPress
In this episode, the dynamic Brian duo discuss the highly-anticipated return of WordSesh, the different ways in which we all learn the same things, and some of the problems we face in skill building.

Reading and Viewing

Surface Book 2 For Development
Over the past month and a half I've been trying to fully switch to a new work machine. Instead of my trusty MacBook Pro, I've mostly been working with a Microsoft Surface Book 2. Here are my lessons from this period.

Kinsta Kingpin: Interview with James Laws
This is our recent interview with James Laws, as part of our Kinsta Kingpin series.

Cloudways Interview: The Journey Of Anthony Hortin Running A WordPress Development Agency
This week we interview Anthony Hortin, a WordPress web design veteran. Hailing from Melbourne, Anthony is the owner of Maddison Designs, a web development and design agency that specialises in providing customised WordPress sites using the latest web design trends and practices.

DrupalCon North America Location Survey Results
Over the past few years, we’ve been listening to the community ask for an explanation as to why we haven’t held any DrupalCon North America events outside of the United States; after all, it’s called DrupalCon North America, not DrupalCon U.S.A. This isn’t something we’ve taken lightly or ignored. DrupalCon North America is a major funding source for the Drupal Association, and in that regard, a major funding source of Drupal.org and the engineering work that keeps the code accessible and available for everyone.

PHP Digest #16: News and Tools
We know you missed our PHP digest. Finally, the 16th issue is hot off the press! Read about an open analogue of Google Analytics, a powerful library for describing business processes, a simple and easy-to-use catch-all SMTP mail server, a static code analyser for finding possible errors, and more. Enjoy!

New Course: Code a Single-Page App With Laravel and Vue.js
Want to add more responsiveness and interactivity to your Laravel app? Try using the cutting-edge Vue.js JavaScript framework to create a fluid, responsive single-page app. You'll learn how to do it from start to finish in our comprehensive new course, Code a Single-Page App With Laravel and Vue.js.

Jobs

Senior PHP Front End Developer - Limassol, Cyprus
Cooperating closely with the design team and content writers to implement any necessary changes to multiple company websites. Developing and testing new features. Overseeing the correct functionality of the multiple company websites and solving any problems these websites encounter and/or liaising with the appropriate expert. Performing routine site maintenance as needed and detecting errors. Staying abreast of the latest developments in his/her field, emerging technologies and services that may enhance the web experience. Making relevant recommendations to the PHP FED team. Assisting other departments with any queries related to PHP FED team responsibilities.

Senior PHP Back End Developers - Limassol, Cyprus
Gathering requirements, designing and implementing new features/projects. Maintaining and refactoring existing web applications such as the Company’s payment gateway. Resolving support tickets for IT related issues. Researching and integrating new web technologies. Collaborating with other departments or IT staff members.

German Speaking PHP Developer (m/f)
You’re proud to call yourself a nerd and consider programming in PHP to be more than just a job? You’d like to help us make our shop better and faster while simultaneously providing our customers with an unparalleled and flawless shopping experience? If you feel like this describes you and also happen to have a weakness for new technology, you’re just the person we’re looking for!




Do you have a position that you would like to fill? PHP Weekly is ideal for targeting developers and the cost is only $50/week for an advert.  Please let me know if you are interested by emailing me at katie@phpweekly.com

Interesting Projects, Tools and Libraries

php-business-time
"Business time" logic in PHP (aka "business hours", "working days" etc). This can be useful for calculating shipping dates, for example.

phpactor
This project aims to provide heavy-lifting refactoring and introspection tools which can be used standalone or as the backend for a text editor to provide intelligent code completion.

server-status
A simple, modern-looking server status page with administration and some nice features that can run even on shared webhosting.

spftoolbox
A JavaScript and PHP app to look up DNS records such as SPF, MX, Whois, and more.

firefly-iii
"Firefly III" is a (self-hosted) manager for your personal finances.

kahlan
A full-featured Unit & BDD test framework a la RSpec/JSpec which uses a describe-it syntax and moves testing in PHP one step forward.

wondercms
Fast, responsive, single user flat file CMS. Built with PHP and jQuery.

viber-bot-php
PHP bot interface to work with Viber API.

photo-blog
The Photo Blog Application based on Laravel 5 and Vue.js 2 + Prerender.

cypht
Lightweight open source webmail written in PHP and JavaScript.

panthere
A browser testing and web crawling library for PHP and Symfony.

platform
RAD platform for building a business application using the Laravel framework.


Copyright © 2018 PHP Weekly, All rights reserved.

GDPR, FastMail and you

Published 24 Apr 2018 by Bron Gondwana in FastMail blog.

Following on from December’s blog post, our executive team has been hard at work for the past few months making preparations for the upcoming GDPR. Current customers, no matter where you are located, should expect to receive notices soon about changes.

GDPR has been a great opportunity for us to confirm everything we believe about our products. Your data is yours, and we should be able to clearly articulate how we touch it.

What is GDPR?

The General Data Protection Regulation (GDPR) is a new set of rules from the European Union (EU) that sets a standard for how companies use and protect people’s personal data. It comes into effect on May 25, 2018.

While aimed specifically at EU citizens, we feel it aligns closely with our own privacy values (you are our customer, not the product) and we will be providing the same transparency and protection for all our customers, regardless of where they live or their country of citizenship.

It covers:

What does “personal data” mean?

Anything that can help identify an individual is personal data.

Some examples: your email address, your IP address, your physical address, appointments you might have coming up, where you work, who your family members are.

There is some obvious personal information that we collect from users of any of our products (FastMail, Topicbox, Pobox, Listbox): your email address, billing information, and IP address. But email content is also considered personal information, because it can contain anything. We can’t know what you might have put in your email, so we must treat it as personal data.

What this regulation means for you

FastMail is serious about protecting your privacy. It is one of our core values. We believe that security is more than a checkbox.

We will be updating our policies before GDPR comes into effect, and we continue our commitment to plain language and a clear outline of what you can expect from us and any data processing vendors we use.

You control your data, when it comes to email, contacts and calendars. We provide processing of that data in order to supply you with an email service. Our job is to execute your wishes faithfully, efficiently, and with low friction so you can get on with your day.

We process your data to ensure we can deliver your mail, to keep your mailbox free from spam and to make it easy to search.

Our support team do not have access to your email content beyond what’s minimally necessary to supply you with service, unless you explicitly provide consent for the purposes of resolving a support issue.

We periodically profile data in aggregate to test and validate the design of software to ensure we can handle size, scale, and throughput of our customer base.

There are only two ways we use your information for anything other than directly providing the email service you pay us for:

  1. If you opt in to our newsletters, you will occasionally receive information about changes to our service, company news, or surveys to help us find out how we can help our customers.
  2. We use information in aggregate for marketing purposes, to better understand the people interested in our service and how we can better meet their needs.

How is FastMail preparing for GDPR?

Because of our longstanding commitment to your privacy, this is a problem we’ve given a lot of thought. We are continuing to review our processes and data to make sure that the only staff who have access to your information are the ones who need it in order to provide you with the service you pay us for.

We are ensuring that in meeting our obligations, we don't get in your way: our service will remain fast, and easy to use. We believe that your privacy is a right, not a chore.

You have the “right to be forgotten” under the GDPR. This means you can request that we delete all your personal data from our platform. To avoid creating an opening for malicious attacks, your account is removed from our platform and the data is cleared after a waiting period (just in case a hacker was the one who closed your account).

Our work through open standards means your data has always been portable and you can download it at any time.

We are working with our vendors who help us provide our service to ensure that they, too, are upholding the GDPR and updating our contracts as necessary.

We are preparing Data Protection Agreements (DPAs) for customers to sign, where needed.

We are appointing a Privacy Officer (and you can contact them at privacy@fastmail.com). Their role is to manage FastMail’s compliance with the GDPR regulation, with the help of an externally appointed Data Protection Officer.

Stay tuned

Customers can expect a policy update soon. More information will be published on our blog and help pages as we complete the steps necessary to guarantee compliance.

If you have any questions or concerns not addressed here, please contact privacy@fastmail.com.


Together

Published 20 Apr 2018 by Matthew Roth in code.flickr.com.

Flickr is excited to be joining SmugMug!

We’re looking forward to some interesting and challenging engineering projects in the next year, and would love to have more great people join the team!

We want to talk to people who are interested in working on an inclusive, diverse team, building large-scale systems that are backing a much-loved product.

You can reach us by email at: iwanttowork@flickr.com

Read our announcement blog post and our extended Q&A for more details.

~The Flickr Team


2017 Year Review

Published 20 Apr 2018 by addshore in Addshore.

2017 has been a great year with continued work at WMDE on both technical wishes projects and also Wikibase / Wikidata related areas. Along the way I shared a fair amount of this through this blog, although not as much as I would have liked. Hopefully I’ll be slightly more active in 2018. Here are some fun stats:

Top 5 posts by page views in 2017 were:

  1. Guzzle 6 retry middleware
  2. Misled by PHPUnit at() method
  3. Wikidata Map July 2017
  4. Add Exif data back to Facebook images
  5. Github release download count – Chrome Extension

To make myself feel slightly better, we can have a look at GitHub and the apparent 1,203 contributions in 2017:

The post 2017 Year Review appeared first on Addshore.


The 2nd UK AtoM user group meeting

Published 20 Apr 2018 by Jenny Mitcham in Digital Archiving at the University of York.

I was pleased to be able to host the second meeting of the UK AtoM user group here in York at the end of last week. AtoM (or Access to Memory) is the Archival Management System that we use here at the Borthwick Institute and it seems to be increasing in popularity across the UK.

We had 18 attendees from across England, Scotland and Wales representing both archives and service providers. It was great to see several new faces and meet people at different stages of their AtoM implementation.

We started off with introductions and everyone had the chance to mention one recent AtoM triumph and one current problem or challenge. A good way to start the conversation and perhaps a way of considering future development opportunities and topics for future meetings.

Here is a selection of the successes that were mentioned:

  • Establishing a search facility that searches across two AtoM instances
  • Getting senior management to agree to establishing AtoM
  • Getting AtoM up and running
  • Finally having an online catalogue
  • Working with authority records in AtoM
  • Working with other contributors and getting their records displaying on AtoM
  • Using the API to drive another website
  • Upgrading to version 2.4
  • Importing legacy EAD into AtoM
  • Uploading finding aids into AtoM 2.4
  • Adding 1000+ URLs for digital resources to AtoM using a set of SQL update statements

...and here are some of the current challenges or problems users are trying to solve:
  • How to bar code boxes - can this be linked to AtoM?
  • Moving from CALM to AtoM
  • Not being able to see the record you want to link to when trying to select related records
  • Using the API to move things into an online showcase
  • Advocacy for taking the open source approach
  • Working out where to start and how best to use AtoM
  • Sharing data with the Archives Hub
  • How to record objects alongside archives
  • Issues with harvesting EAD via OAI-PMH
  • Building up the right level of expertise to be able to contribute code back to AtoM
  • Working out what to do when AtoM stops working
  • Discovering that AtoM doesn't enforce uniqueness in identifiers for archival descriptions

After some discussion about some of the issues that had been raised, Louise Hughes from the University of Gloucestershire showed us her catalogue and talked us through some of the decisions they had made as they set this up. 

The University of Gloucestershire's AtoM instance

She praised the digital object functionality and has been using this to add images and audio to the archival descriptions. She was also really happy with the authority records, in particular, being able to view a person and easily see which archives relate to them. She discussed ongoing work to enable records from AtoM to be picked up and displayed within the library catalogue. She hasn't yet started to use AtoM for accessioning but hopes to do so in the future. Adopting all the functionality available within AtoM needs time and thought and tackling it one step at a time (particularly if you are a lone archivist) makes a lot of sense.

Tracy Deakin from St John's College, Cambridge talked us through some recent work to establish a shared search page for their two institutional AtoM instances. One holds the catalogue of the college archives and the other is for the Special Collections Library. They had taken the decision to implement two separate instances of AtoM as they required separate front pages and the ability to manage the editing rights separately. However, as some researchers will find it helpful to search across both instances a search page has been developed that accesses the Elasticsearch index of each site in order to cross search.

The interface for a shared search across St John's College AtoM sites

Vicky Phillips from the National Library of Wales talked us through their processes for upgrading their AtoM instance to version 2.4 and discussed some of the benefits of moving to 2.4. They are really happy to have the full width treeview and the drag and drop functionality within it.

The upgrade has not been without its challenges, though. They have had to sort out some issues with invalid slugs and ongoing issues due to the size of some of their archives (they think the XML caching functionality will help with this), and they sometimes find that MySQL gets overwhelmed by the number of queries and needs a restart. They still have some testing to do around bilingual finding aids and have also been testing the new functionality around OAI-PMH harvesting of EAD.

Following on from this I gave a presentation on upgrading AtoM to 2.4 at the Borthwick Institute. We are not quite there yet but I talked about the upgrade plan and process and some decisions we have made along the way. I won't say any more for the time being as I think this will be the subject of a future blog post.

Before lunch my colleague Charles Fonge introduced VIAF (Virtual International Authority File) to the group. This initiative will enable Authority Records created by different organisations across the world to be linked together more effectively. Several institutions may create an authority record about the same individual and currently it is difficult to allow these to be linked together when data is aggregated by services such as The Archives Hub. It is worth thinking about how we might use VIAF in an AtoM context. At the moment there is no place to store a VIAF ID in AtoM and it was agreed this would be a useful development for the future.

After lunch Justine Taylor from the Honourable Artillery Company introduced us to the topic of back up and disaster recovery of AtoM. She gave the group some useful food for thought, covering techniques and the types of data that would need to be included (hint: it's not solely about the database). This was particularly useful for those working in small institutions who don't have an IT department that just does all this for them as a matter of course. Some useful and relevant information on this subject can be found in the AtoM documentation.

Max Communications are a company who provide services around AtoM. They talked through some of their work with institutions and what services they can offer.  As well as being able to provide hosting and support for AtoM in the UK, they can also help with data migration from other archival management systems (such as CALM). They demonstrated their crosswalker tool that allows archivists to map structured data to ISAD(G) before import to AtoM.

They showed us an AtoM theme they had developed to allow Vimeo videos to be embedded and accessible to users. Although AtoM does have support for video, the files can be very large in size and there are large overheads involved in running a video server if substantial quantities are involved. Keeping the video outside of AtoM and managing the permissions through Vimeo provided a good solution for one of their clients.

They also demonstrated an AtoM plugin they had developed for WordPress. Though they are big fans of AtoM, they pointed out that it is not the best platform for creating interesting narratives around archives. They were keen to be able to create stories about archives by pulling in data from AtoM where appropriate.

At the end of the meeting Dan Gillean from Artefactual Systems updated us (via Skype) about the latest AtoM developments. It was really interesting to hear about the new features that will be in version 2.5. Note, that none of this is ever a secret - Artefactual make their road map and release notes publicly available on their wiki - however it is still helpful to hear it enthusiastically described.

The group was really pleased to hear about the forthcoming audit logging feature, the clever new functionality around calculating creation dates, and the ability for users to save their clipboard across sessions (and share them with the searchroom when they want to access the items). Thanks to those organisations that are funding this exciting new functionality. Also worth a mention is the slightly less sexy, but very valuable work that Artefactual is doing behind the scenes to upgrade Elasticsearch.

Another very useful meeting and my thanks go to all who contributed. It is certainly encouraging to see the thriving and collaborative AtoM community we have here in the UK.

Our next meeting will be in London in the autumn.

The 2nd UK AtoM user group meeting

Published 20 Apr 2018 by Jenny Mitcham in Digital Archiving at the University of York.

I was pleased to be able to host the second meeting of the UK AtoM user group here in York at the end of last week. AtoM (or Access to Memory) is the Archival Management System that we use here at the Borthwick Institute and it seems to be increasing in popularity across the UK.

We had 18 attendees from across England, Scotland and Wales representing both archives and service providers. It was great to see several new faces and meet people at different stages of their AtoM implementation.

We started off with introductions and everyone had the chance to mention one recent AtoM triumph and one current problem or challenge. A good way to start the conversation and perhaps a way of considering future development opportunities and topics for future meetings.

Here is a selection of the successes that were mentioned:

  • Establishing a search facility that searches across two AtoM instances
  • Getting senior management to agree to establishing AtoM
  • Getting AtoM up and running
  • Finally having an online catalogue
  • Working with authority records in AtoM
  • Working with other contributors and getting their records displaying on AtoM
  • Using the API to drive another website
  • Upgrading to version 2.4
  • Importing legacy EAD into AtoM
  • Uploading finding aids into AtoM 2.4
  • Adding 1000+ urls to digital resources into AtoM using a set of SQL update statements

...and here are some of the current challenges or problems users are trying to solve:
  • How to bar code boxes - can this be linked to AtoM?
  • Moving from CALM to AtoM
  • Not being able to see the record you want to link to when trying to select related records
  • Using the API to move things into an online showcase
  • Advocacy for taking the open source approach
  • Working out where to start and how best to use AtoM
  • Sharing data with the Archives Hub
  • How to record objects alongside archives
  • Issues with harvesting EAD via OAI-PMH
  • Building up the right level of expertise to be able to contribute code back to AtoM
  • Working out what to do when AtoM stops working
  • Discovering that AtoM doesn't enforce uniqueness in identifiers for archival descriptions

After some discussion about some of the issues that had been raised, Louise Hughes from the University of Gloucestershire showed us her catalogue and talked us through some of the decisions they had made as they set this up. 

The University of Gloucestershire's AtoM instance

She praised the digital object functionality and has been using this to add images and audio to the archival descriptions. She was also really happy with the authority records, in particular, being able to view a person and easily see which archives relate to them. She discussed ongoing work to enable records from AtoM to be picked up and displayed within the library catalogue. She hasn't yet started to use AtoM for accessioning but hopes to do so in the future. Adopting all the functionality available within AtoM needs time and thought and tackling it one step at a time (particularly if you are a lone archivist) makes a lot of sense.

Tracy Deakin from St John's College, Cambridge talked us through some recent work to establish a shared search page for their two institutional AtoM instances. One holds the catalogue of the college archives and the other is for the Special Collections Library. They had taken the decision to implement two separate instances of AtoM as they required separate front pages and the ability to manage the editing rights separately. However, as some researchers will find it helpful to search across both instances a search page has been developed that accesses the Elasticsearch index of each site in order to cross search.

The interface for a shared search across St John's College AtoM sites

Vicky Phillips from the National Library of Wales talked us through their processes for upgrading their AtoM instance to version 2.4 and discussed some of the benefits of moving to 2.4. They are really happy to have the full width treeview and the drag and drop functionality within it.

The upgrade has not been without it's challenges though. They have had to sort out some issues with invalid slugs, ongoing issues due to the size of some of their archives (they think the XML caching functionality will help with this) and sometimes find that MySQL gets overwhelmed with the number of queries and needs a restart. They still have some testing to do around bilingual finding aids and have also been working on testing out the new functionality around OAI PMH harvesting of EAD.

Following on from this I gave a presentation on upgrading AtoM to 2.4 at the Borthwick Institute. We are not quite there yet but I talked about the upgrade plan and process and some decisions we have made along the way. I won't say any more for the time being as I think this will be the subject of a future blog post.

Before lunch my colleague Charles Fonge introduced VIAF (Virtual International Authority File) to the group. This initiative will enable Authority Records created by different organisations across the world to be linked together more effectively. Several institutions may create an authority record about the same individual and currently it is difficult to allow these to be linked together when data is aggregated by services such as The Archives Hub. It is worth thinking about how we might use VIAF in an AtoM context. At the moment there is no place to store a VIAF ID in AtoM and it was agreed this would be a useful development for the future.

After lunch Justine Taylor from the Honourable Artillery Company introduced us to the topic of back up and disaster recovery of AtoM. She gave the group some useful food for thought, covering techniques and the types of data that would need to be included (hint: it's not solely about the database). This was particularly useful for those working in small institutions who don't have an IT department that just does all this for them as a matter of course. Some useful and relevant information on this subject can be found in the AtoM documentation.

Max Communications are a company who provide services around AtoM. They talked through some of their work with institutions and what services they can offer.  As well as being able to provide hosting and support for AtoM in the UK, they can also help with data migration from other archival management systems (such as CALM). They demonstrated their crosswalker tool that allows archivists to map structured data to ISAD(G) before import to AtoM.

They showed us an AtoM theme they had developed to allow Vimeo videos to be embedded and accessible to users. Although AtoM does have support for video, the files can be very large in size and there are large overheads involved in running a video server if substantial quantities are involved. Keeping the video outside of AtoM and managing the permissions through Vimeo provided a good solution for one of their clients.

They also demonstrated an AtoM plugin they had developed for Wordpress. Though they are big fans of AtoM, they pointed out that it is not the best platform for creating interesting narratives around archives. They were keen to be able to create stories about archives by pulling in data from AtoM where appropriate.

At the end of the meeting Dan Gillean from Artefactual Systems updated us (via Skype) on the latest AtoM developments. It was really interesting to hear about the new features that will be in version 2.5. Note that none of this is a secret - Artefactual make their road map and release notes publicly available on their wiki - but it is still helpful to hear it enthusiastically described.

The group was really pleased to hear about the forthcoming audit logging feature, the clever new functionality around calculating creation dates, and the ability for users to save their clipboard across sessions (and share them with the searchroom when they want to access the items). Thanks to those organisations that are funding this exciting new functionality. Also worth a mention is the slightly less sexy, but very valuable work that Artefactual is doing behind the scenes to upgrade Elasticsearch.

Another very useful meeting and my thanks go to all who contributed. It is certainly encouraging to see the thriving and collaborative AtoM community we have here in the UK.

Our next meeting will be in London in the autumn.

Back to the classroom - the Domesday project

Published 20 Apr 2018 by Jenny Mitcham in Digital Archiving at the University of York.

Yesterday I was invited to speak to a local primary school about my job. The purpose of the event was to inspire kids to work in STEM subjects (science, technology, engineering and maths) and I was faced with an audience of 10 and 11 year old girls.

One member of the audience (my daughter) informed me that many of the girls were only there because they had been bribed with cake.

This could be a tough gig!

On a serious note, there is a huge gender imbalance in STEM careers with women only making up 23% of the workforce in core STEM occupations. In talking to the STEM ambassador who was at this event, it was apparent that recruitment in engineering is quite hard, with not enough boys OR girls choosing to work in this area. This is also true in my area of work and is one of the reasons we are involved in the "Bridging the Digital Gap" project led by The National Archives. They note in a blog post about the project that:

"Digital skills are vital to the future of the archives sector ...... if archives are going to keep up with the pace of change, they need to attract members of the workforce who are confident in using digital technology, who not only can use digital tools, but who are also excited and curious about the opportunities and challenges it affords."

So why not try and catch them really young and get kids interested in our profession?

There were a few professionals speaking at the event and subjects were varied and interesting. We heard from someone who designed software for cars (who knew how many different computers are in a modern car?), someone who had to calculate exact mixes of seed to plant in Sites of Special Scientific Interest in order to encourage the right wild birds to nest there, a scientist who tested gelatin in sweets to find out what animal it was made from, an engineer who uses poo to heat houses... I had some pretty serious competition!

I only had a few minutes to speak so my challenge was to try and make digital preservation accessible, interesting and relevant in a short space of time. You could say that this was a bit of an elevator pitch to school kids.

Once I got thinking about this I had several ideas of different angles I could take.

I started off looking at the Mount School Archive that is held at the Borthwick. This is not a digital archive but was a good introduction to what archives are all about and why they are interesting and important. Up until 1948 the girls at this school created their own school magazine that is beautifully illustrated and gives a fascinating insight into what life was like at the school. I wanted to compare this with how schools communicate and disseminate information today and discuss some of the issues with preserving this more modern media (websites, twitter feeds, newsletters sent to parents via email).

Several powerpoint slides down the line I realised that this was not going to be short and snappy enough.

I decided to change my plans completely and talk about something that they may already know about, the Domesday Book.

I began by asking them if they had heard of the Domesday Book. Many of them had. I asked what they knew about it. They thought it was from 1066 (not far off!), someone knew that it had something to do with William the Conqueror, they guessed it was made of parchment (and they knew that parchment was made of animal skin). They were less certain of what it was actually for. I filled in the gaps for them.

I asked them whether they thought this book (that was over 900 years old) could still be accessed today and they weren't so sure about this. I was able to tell them that it is being well looked after by The National Archives and can still be accessed in a variety of ways. The main barrier to understanding the information is that it is written in Latin.

I talked about what the Domesday Book tells us about our local area. A search on Open Domesday tells us that Clifton only had 12 households in 1086. Quite different from today!

We then moved forward in time, to a period of history known as 'The 1980's' (a period that the children had recently been studying at school - now that makes me feel old!). I introduced them to the BBC Domesday Project of 1986. Without a doubt one of digital preservation's favourite case studies!

I explained how school children and communities were encouraged to submit information about their local areas. They were asked to include details of everyday life and anything they thought might be of interest to people 1000 years from then. People took photographs and wrote information about their lives and their local area. The data was saved on to floppy disks (what are they?) and posted to the BBC (this was before email became widely available). The BBC collated all the information on to laser disc (something that looks a bit like a CD but with a diameter of about 30cm).

I asked the children to consider the fact that the 900 year old Domesday Book is still accessible and to think about whether the 30 year old BBC Domesday Project discs were equally accessible. In discussion this gave me the opportunity to finally mention what digital archivists do and why it is such a necessary and interesting job. I didn't go into much technical detail but all credit to the folks who actually rescued the Domesday Project data. There is lots more information here.

Searching the Clifton and Rawcliffe area on Domesday Reloaded


Using the Domesday Reloaded website I was then able to show them what information is recorded about their local area from 1986. There was a picture of houses being built, and narratives about how a nearby lake was created. There were pieces written by a local school child and a teacher describing their typical day. I showed them a piece that was written about 'Children's Crazes' which concluded with:

" Another new activity is break-dancing
 There is a place in York where you can
 learn how to break-dance. Break     
 dancing means moving and spinning on
 the floor using hands and body. Body-
 popping is another dance craze where
 the dancer moves like a robot."


Disappointingly the presentation didn't entirely go to plan - my powerpoint only partially worked and the majority of my carefully selected graphics didn't display.

A very broken powerpoint presentation

There was thus a certain amount of 'winging it'!

This did however allow me to make the point that working with technology can be challenging as well as perhaps frustrating and exciting in equal measure!



PHPWeekly April 19th 2018

Published 19 Apr 2018 by in PHP Weekly Archive Feed.

PHPWeekly April 19th 2018
Curated news all about PHP.  Here's the latest edition
PHP Weekly 19th April 2018
Hello to the PHP community, and welcome to PHPweekly.com.

Are you looking to recruit new staff?
Looking for a high standard of applicant?
Would you like to reach out to the PHP community to fill your position?
Where better to advertise your job openings than on phpweekly.com?

Do you want to attract new talent, or new business, to your company?
How about sponsoring an edition of phpweekly.com?
A stand out advert at the top of our page will catch the eyes of our subscribers.

With our subscriber list nudging 21,000, you could just find exactly who, or what, you are looking for right here.

For more information drop me a line at katie@phpweekly.com.

Cheers
Ade and Katie

Please help us by clicking to our sponsor:

encrypt php scripts 
Protect your PHP Code
Why not try SourceGuardian 11. Click here to download a 14 Day Trial copy. Protect your code using Windows, Linux or Mac and run everywhere with our free Loaders.

Articles

20 Laravel Eloquent Tips and Tricks
Eloquent ORM seems like a simple mechanism, but under the hood, there’s a lot of semi-hidden functions and less-known ways to achieve more with it. In this article, I will show you a few tricks.

The Main Reasons We Use Symfony for Web Application Developments
At Outsourcify we work on projects of varying sizes, from small sites with a few pages to complex business applications. Depending on the case, we recommend different technical solutions (we do a lot of Javascript SPA and Wordpress also), but for the most complex cases, when we have to chose a technology to build large web applications that require several weeks or months of work for several web developers, Symfony is our framework of choice.

What PHP Can Be
Have you ever wondered how your life as a PHP developer would be different if that one feature you want was added? I've made the thought experiment quite a few times already, and came to surprising conclusions.

Tutorials and Talks

Unpacking Binary Data in PHP
Working with binary files in PHP is rarely a requirement. However when needed the PHP ‘pack’ and ‘unpack’ functions can help you tremendously. To set the stage we will start with a programming problem, this will keep the discussion anchored to a relevant context. The problem is this : We want to write a function that takes a image file as an argument and tells us whether the file is a GIF image; irrelevant with whatever the extension the file may have. We are not to use any GD library functions.
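As a sketch of the approach that article describes (the function name and format string here are my own, not taken from the article), a GIF check built on unpack might look like this:

```php
<?php
// Decide whether a file is a GIF by unpacking its 6-byte header,
// regardless of the file's extension. No GD functions needed.
function isGif(string $path): bool {
    $fh = fopen($path, 'rb');
    if ($fh === false) {
        return false;
    }
    $header = fread($fh, 6);
    fclose($fh);
    if ($header === false || strlen($header) < 6) {
        return false;
    }
    // 'a3' reads a fixed 3-byte string: first the "GIF" signature,
    // then the version ("87a" or "89a").
    $parts = unpack('a3sig/a3ver', $header);
    return $parts['sig'] === 'GIF' && in_array($parts['ver'], ['87a', '89a'], true);
}

// Every GIF file starts with "GIF87a" or "GIF89a".
file_put_contents('/tmp/example.gif', "GIF89a" . str_repeat("\x00", 10));
var_dump(isGif('/tmp/example.gif')); // bool(true)
```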

PHP Version Support and Fatal Errors
This document covers WP Google Maps PHP Version Support and is relevant to users experiencing fatal errors after updating to Version 7.

WordPress Form Submission The Right Way
Building a WordPress form can be tricky if you want to do it the right way. I've done a little study on the subject and found what seems to be the right way to handle form submission in WordPress; a quick how-to is explained below.

Sending a Daily Email with Laravel and Campaign Monitor
Here on Laravel News, we offer multiple ways of staying up to date with new content: everything from auto-sharing to all the social media channels, a read-only Telegram channel, a weekly newsletter, and, since last March, a daily email digest. To send the daily email we utilize the Laravel scheduler and Campaign Monitor, so it's completely automated. In this tutorial let's look at how it's all set up and how you can easily add this to your site to start sending out automated emails.

Sharing Databases Between Laravel Applications
We have a customer-facing members area and an internal CRM that both work with the same main database. In late 2017, we started migrating our CRM to Laravel as well, in order to modernise the code base a bit, give it a standard structure, and make it easy to make changes to it moving forward. Now that we had two Laravel applications, we started looking at how best to share data between them.

The State of Testing in PHP in 2018
Testing code is an essential aspect of writing software of any level of quality. It’s essential because it helps us know that the code works as it should; whether that’s a specific unit of functionality or the application as a whole from the perspective of the end user. So how is PHP faring in 2018? What’s the state of testing in PHP in 2018? In this article, I’m going to answer that question from a variety of perspectives.

5 Steps to Your First Fixer or Sniff Test
When I wrote my first sniff 4 years ago I wanted to test it. I expected a testing class that would register the sniff, run it over some ugly code, and compare the result to a fixed version. So I started exploring PHP_CodeSniffer looking for such a feature. I found one class, then a second, warnings, errors... and after the 10th error, I closed it.

Storing Passwords the Right Way
How should passwords be stored? The short answer is: DON’T! I see countless posts on reddit and around the web from people who are trying to figure out how to use PHP’s “new” password functions.  These new functions are awesome in that they have finally made it so that those who are not security specialists can start managing passwords the right way.  PHP’s password functions do things the right way and give us a means by which to ensure our sites can continue to stay secure – even as the red team closes in, coming up with new ways to break the systems.
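For readers new to the functions the article mentions, the core usage really is just two calls (a minimal sketch):

```php
<?php
// Hash a password with the current default algorithm; the salt and
// cost parameters are generated and embedded in the hash automatically.
$hash = password_hash('correct horse battery staple', PASSWORD_DEFAULT);

// Verification needs only the stored hash, never a separate salt.
var_dump(password_verify('correct horse battery staple', $hash)); // bool(true)
var_dump(password_verify('wrong guess', $hash));                  // bool(false)
```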

Laravel Page Cache for Lightning Fast Page Loads
Laravel Page cache is a plugin by Joseph Silber designed to cache HTTP GET responses as static files for lightning fast page loads. This plugin gives you the benefit of a full PHP application, with the benefits of full-page static file caching for all your routes or any specific routes that are static.

An Extremely Picky Developer's Take on PHP Static Site Generators: Part 1 - Sculpin
I was walking around the park a few days ago. It was a bright day, clear sky, I could see kids playing and their parents chatting a few steps away. "This is nice", I thought, "but what about static site generators for PHP?" Well, that's obviously a made-up story. I wasn't at the park and that day it was actually raining, but that’s really what I was thinking.

Understanding Design Patterns - Observer
Defines a one-to-many dependency between objects so that when an object changes state, all of its dependents are notified and updated automatically.
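That one-to-many dependency can be illustrated with PHP's built-in SplSubject/SplObserver interfaces (the class names below are my own example, not from the linked article):

```php
<?php
// Subject: holds state and notifies every attached observer on change.
class NewsFeed implements SplSubject {
    private $observers;
    public $latest = null;

    public function __construct() { $this->observers = new SplObjectStorage(); }
    public function attach(SplObserver $observer): void { $this->observers->attach($observer); }
    public function detach(SplObserver $observer): void { $this->observers->detach($observer); }
    public function notify(): void {
        foreach ($this->observers as $observer) { $observer->update($this); }
    }
    public function publish(string $item): void {
        $this->latest = $item;
        $this->notify(); // all dependents are updated automatically
    }
}

// Observer: reacts whenever the subject's state changes.
class Subscriber implements SplObserver {
    public $received = [];
    public function update(SplSubject $subject): void {
        $this->received[] = $subject->latest;
    }
}

$feed = new NewsFeed();
$sub = new Subscriber();
$feed->attach($sub);
$feed->publish('New article published');
echo $sub->received[0], "\n"; // New article published
```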

New in Symfony 4.1: Ignore Specific HTTP Codes From Logs
Logging as much information as possible is essential to help you debug the issues found in your applications. However, logging too much information can be as bad as logging too little, because of all the "noise" added to your logs. That's why in Symfony 4.1 we've improved the Monolog integration to allow you exclude log messages related to specific HTTP codes.
News and Announcements

Imagine 2018 - April 23rd-25th 2018, Wynn Las Vegas
Imagine 2018 attracts the biggest innovators in eCommerce. You can network with key merchants, partners, and developers, and join industry leaders in live breakout sessions, customer panels, and keynotes. Can you afford to miss it? The last few tickets are on sale now.

php[tek] Conference - May 31st-June 1st 2018, Atlanta
php[tek] 2018 is the premier PHP conference and annual homecoming for the PHP Community. This conference will be our 13th annual, and php[architect] and One for All Events are excited to continue to host the event in Atlanta! Tickets are on sale now.

Oscon - July 16-19th 2018, Portland
OSCON is the complete convergence of the technologies transforming industries today, and the developers, engineers, and business leaders who make it happen. The 20th Open Source Convention takes place this July. From architecture and performance, to security and data, get expert full stack programming training in open source languages, tools, and techniques. Tickets are on sale now, with the Best Price ticket sales ending tomorrow.

CoderCruise - August 30-September 3rd 2018, Ft. Lauderdale, FL
Tired of the usual web technology conference scene? Want a more inclusive experience that lets you get to know your fellow attendees and make connections? Well, CoderCruise was designed to be just this. It's a polyglot developer conference on a cruise ship! This year we will be taking a 5-day, 4-night cruise out of Ft. Lauderdale, FL that includes stops at Half Moon Cay and Nassau. Tickets are on sale now.

Podcasts

Laravel News Podcast LN61: Releases, Live Events, and Eloquent Eloquent
Jake and Michael discuss all the latest Laravel releases, tutorials, and happenings in the community. 

Post Status Draft Podcast - The Future of Content Distribution
This week the Brians put their brains together and discuss content distribution across various mediums and platforms as well as subscriptions for both digital and physical products.

Ember Four Years Later
Chad Hietala joined the show to talk with us about the long history of Ember.js, how he first got involved, his work at LinkedIn and his work as an Ember Core team member, how the Ember team communicates expectations from release to release, their well documented RFC process, ES Classes in Ember, Glimmer, and where Ember is being used today.

Reading and Viewing

How Anyone Can Write a Post Here
Do you want to write about PHP, but don't have a blog? Do you have some ideas you'd like to share, but don't have time and know-how to spread them over social networks? Do you want to share your ideas with hundreds of listening programmers? Just write a post in Markdown and send a PR to this open-source blog.

Cloudways Interview - Raúl E Watson Shares Magento And Ecommerce Developments, And Personal Experiences
Raúl E Watson is a well-known Certified Magento Professional. He is currently associated with Space48, a prolific Magento development agency based in the United Kingdom (UK). He has more than ten years of experience under his belt and has worked on some fantastic award-winning ecommerce projects.

Uncovering Drupalgeddon 2
Two weeks ago, a highly critical (25/25 NIST rank) vulnerability, nicknamed Drupalgeddon 2 (SA-CORE-2018-002 / CVE-2018-7600), was disclosed by the Drupal security team. This vulnerability allowed an unauthenticated attacker to perform remote code execution on default or common Drupal installations. Until now details of the vulnerability were not available to the public, however, Check Point Research can now expand upon this vulnerability and reveal exactly how it works.

The PHP Lands Map
Explore the PHP language and ecosystem in a fun and interactive way using a pirate map.

Jobs

LaraTalent - Companies apply to YOU
LaraTalent is a reverse job board. We find the best PHP developers and showcase them to companies looking to hire the best talent.

Senior PHP Front End Developer - Limassol, Cyprus
Cooperating closely with the design team and content writers to implement any necessary changes to multiple company websites. Developing and testing new features. Overseeing the correct functionality of the multiple company websites and solving any problems these websites encounter and/or liaising with the appropriate expert. Performing routine site maintenance as needed and detecting errors. Staying abreast of the latest developments in his/her field, emerging technologies and services that may enhance the web experience. Making relevant recommendations to the PHP FED team. Assisting other departments with any queries related to PHP FED team responsibilities.

Senior PHP Back End Developers - Limassol, Cyprus
Gathering requirements, designing and implementing new features/projects. Maintaining and refactoring existing web applications such as the Company’s payment gateway. Resolving support tickets for IT related issues. Researching and integrating new web technologies. Collaborating with other departments or IT staff members.






Do you have a position that you would like to fill? PHP Weekly is ideal for targeting developers and the cost is only $50/week for an advert.  Please let me know if you are interested by emailing me at katie@phpweekly.com

Interesting Projects, Tools and Libraries

pvm
The process virtual machine. Build workflows/business processes with ease.

laravel-auto
A Laravel helper package to make automated lists with filters, sorting and paging like no other.

mapbender
This is the Mapbender module, the main-component of the Mapbender application.

mylittleforum
A simple PHP and MySQL based internet forum that displays the messages in classical threaded view (tree structure).

tripod-php
Object Graph Mapper for managing RDF data in Mongo.

voten
Voten.co is an open-source, beautiful, highly customisable yet deadly simple, and warm community. 

wbf
WBF is an extensive WordPress framework.

geodesy-php
Geodesy-PHP is a PHP port of some known geodesic functions for getting the distance from a known point A to a known point B, given their latitude and longitude.

phpenums
Provides enumerations for PHP & frameworks integrations.

myacc
MyAAC is a free and open-source Automatic Account Creator (AAC) and Content Management System (CMS) written in PHP.

auth-tests
Always-current tests for `php artisan auth:make` command. Curated by the community. 

akaunting
A free, open source and online accounting software designed for small businesses and freelancers.

anspress
The most complete question and answer system for WordPress.

zend-expressive
Builds on zend-stratigility to provide a minimalist PSR-7 middleware framework for PHP.


So, how did you like this issue?

Like us on Facebook | Follow us on Twitter
We are still trying to grow our list. If you find PHP Weekly useful please tweet about us! Thanks.
Also, if you have a site or blog related to PHP then please link through to our site.

Copyright © 2018 PHP Weekly, All rights reserved.

Slim 3.10.0 released

Published 19 Apr 2018 by in Slim Framework Blog.

We are delighted to release Slim 3.10.0. This version has a couple of minor new features and a couple of bug fixes.

The most noticeable improvement is that we now support $app->redirect('/from', '/to') to allow quick and easy redirecting of one path to another without having to write a route handler yourself. We have also added support for the SameSite flag in Slim\Http\Cookies.
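As a sketch of the new helper in a standard Slim 3 app (the paths and status code here are illustrative, and the snippet assumes Slim is installed via Composer):

```php
<?php
require 'vendor/autoload.php';

$app = new \Slim\App();

// Register a redirect from /old-home to /home with a 301 status,
// without writing a route handler by hand.
$app->redirect('/old-home', '/home', 301);

$app->run();
```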

As usual, there are also some bug fixes; in particular, we no longer override the Host header in the request if it's already defined.

The full list of changes is here


Pittsburgh, We’ll See Yinz at RailsConf!

Published 18 Apr 2018 by Jaime Woo in The DigitalOcean Blog.

Pittsburgh, We’ll See Yinz at RailsConf!

RailsConf has left the desert and makes its way to Steel City April 17-19, 2018. We’ll have Sam Phippen presenting, and several DO-ers checking out talks and tending our booth. Here’s what you need to know about RailsConf 2018.

In Sam’s talk, “Quick and easy browser testing using RSpec and Rails 5.1,” you'll learn about the new system specs in RSpec, how to set them up, and what benefits they provide. It’s for anyone wanting to improve their RSpec suite with full-stack testing.

From the talk description:

Traditionally doing a full-stack test of a Rails app with RSpec has been problematic. The browser wouldn't automate, capybara configuration would be a nightmare, and cleaning up your DB was difficult. In Rails 5.1 the new 'system test' type was added to address this. With modern RSpec and Rails, testing every part of your stack including Javascript from a browser is now a breeze.

Make sure you don’t miss it, Thursday, April 19, from 10:50 AM-11:30 AM in the Spirit of Pittsburgh Ballroom. If you’re interested in RSpec, you might dig his talk from 2017, “Teaching RSpec to Play Nice with Rails.”

You can also catch us in the Exhibit Hall, at booth number 520. The Hall is on Level 2, in Hall A. We’ll be hanging at our booth Wednesday, April 18 from 9:30 AM-6:00 PM, and Thursday, April 19 from 9:30 AM-5:15 PM.

See you there, or, as they say in Pittsburgh, meechinsdahnair!


MediaWiki with two database servers

Published 18 Apr 2018 by Sam Wilson in Sam's notebook.

I’ve been trying to replicate locally a bug with MediaWiki’s GlobalPreferences extension. The bug concerns the increased number of database reads that happen when the extension is loaded, and the increase happens not on the database table that stores the global preferences (as might be expected) but rather on the ‘local’ tables. Locally, however, I’ve had all of these running on the same database server, which makes it hard to use the standard monitoring tools to spot the difference; so I set things up on two database servers.

Firstly, this was a matter of starting a new MySQL server in a Docker container (accessible at 127.0.0.1:3305 and with its data in a local directory so I could destroy and recreate the container as required):

docker run -it -e MYSQL_ROOT_PASSWORD=pwd123 -p3305:3306 -v$PWD/mysqldata:/var/lib/mysql mysql

(Note that because we’re keeping local data, root’s password is only set on the first set-up, and so the MYSQL_ROOT_PASSWORD can be left off future invocations of this command.)

Then it’s a matter of setting up MediaWiki to use the two servers:

$wgLBFactoryConf = [
	'class' => 'LBFactory_Multi',
	'sectionsByDB' => [
		// Map of database names to section names.
		'mediawiki_wiki1' => 's1',
		'wikimeta' => 's2',
	],
	'sectionLoads' => [
		// Map of sections to server-name/load pairs.
		'DEFAULT' => [ 'localdb'  => 0 ],
		's1' => [ 'localdb'  => 0 ],
		's2' => [ 'metadb' => 0 ],
	],
	'hostsByName' => [
		// Map of server-names to IP addresses (and, in this case, ports).
		'localdb' => '127.0.0.1:3306',
		'metadb' => '127.0.0.1:3305',
	],
	'serverTemplate' => [
		'dbname'        => $wgDBname,
		'user'          => $wgDBuser,
		'password'      => $wgDBpassword,
		'type'          => 'mysql',
		'flags'         => DBO_DEFAULT,
		'max lag'       => 30,
	],
];
$wgGlobalPreferencesDB = 'wikimeta';
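The lookup chain this config defines (database name to section, section to server name, server name to host) can be sketched in a few lines; this is a plain illustration of the mapping data above, not MediaWiki’s actual LBFactory code:

```python
# Sketch of how the LBFactory_Multi config above routes queries:
# database name -> section -> server name -> host:port.
# This mirrors the config values only; it is not MediaWiki code.

sections_by_db = {
    'mediawiki_wiki1': 's1',
    'wikimeta': 's2',
}
section_loads = {
    'DEFAULT': {'localdb': 0},
    's1': {'localdb': 0},
    's2': {'metadb': 0},
}
hosts_by_name = {
    'localdb': '127.0.0.1:3306',
    'metadb': '127.0.0.1:3305',
}

def host_for_db(dbname):
    """Return the host:port that queries for dbname are sent to."""
    section = sections_by_db.get(dbname, 'DEFAULT')
    # Each section here has a single server, so just take the first one.
    server = next(iter(section_loads[section]))
    return hosts_by_name[server]

print(host_for_db('mediawiki_wiki1'))  # 127.0.0.1:3306
print(host_for_db('wikimeta'))         # 127.0.0.1:3305
```

With this in place, queries for the wikimeta database (where GlobalPreferences keeps its table) go to the second MySQL server on port 3305, while everything else stays on the default server.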

Episode 6: Daren Welsh and James Montalvo

Published 17 Apr 2018 by Yaron Koren in Between the Brackets: a MediaWiki Podcast.

Daren Welsh and James Montalvo are flight controllers and instructors at the Extravehicular Activity (EVA) group at the Johnson Space Center at NASA. They first set up MediaWiki for their group in 2011; since then, they have overseen the spread of MediaWiki throughout the flight operations directorate at Johnson Space Center. They have also done a significant amount of MediaWiki development, including, most recently, the creation of Meza, a Linux-based tool that allows for easy installation and maintenance of MediaWiki.



Firefox Add-on to skip mobile Wikipedia redirect

Published 14 Apr 2018 by legoktm in The Lego Mirror.

Skip Mobile Wikipedia on Firefox Add-ons

Lately, I've been reading Wikipedia on my phone significantly more than I used to. I get 15 minutes on the train each morning, which makes for some great reading time. But when I'm on my phone, Wikipedia redirects to the mobile website. I'm sure there are some people out there who love it, but it's not for me.

There's a "Desktop" button at the bottom of the page, but it's annoying and inconvenient. So I created my first Firefox Add-on, "Skip Mobile Wikipedia". It rewrites all requests to the mobile Wikipedia website to the standard canonical domain, and sets a cookie to prevent any further redirects. It works on the standard desktop Firefox and on Android.

Install the Add-on and view the source code.


April Community Doers: Meetup Edition

Published 13 Apr 2018 by Daniel Zaltsman in The DigitalOcean Blog.

On the six-year voyage toward becoming the cloud platform for developers and their teams, we have received tremendous support from the larger developer community. We’ve seen hundreds of Meetups organized, pull requests submitted, tutorials written, and Q&As contributed, with even more ongoing activity. To show our appreciation, last month we introduced a new way to highlight some of our most active community contributors - our Community Doers!

Community Doers help make the community better through the content they create and the value they add. In addition to the Community homepage, we’ll regularly highlight Community Doers on the blog, Infrastructure as a Newsletter, social media, and to our growing internal community. In March, we were excited to bring you the trio of Marko, Mateusz, and Peter. This month, with a focus on our global Meetup community, we have three new individuals for you to get to know and celebrate with us. Without further ado, meet April’s featured Community Doers:

Aditya Patawari (@adityapatawari)

Aditya is an early adopter and advocate of DigitalOcean, so it’s no surprise that he became the first organizer of our second largest Meetup group, based in Bangalore. He has been producing Meetups since 2016 and has served as a speaker and panelist at consecutive DigitalOcean TIDE conferences. His talk on foolproofing business through infrastructure gap analysis was well received at TIDE New Delhi, and we later invited him to conduct an online webinar on setting up a multi-tier web application with Ansible. We’re extremely proud and excited to be working with him because of his passion for education and for helping the wider community.

Samina Fu (@sufuf3149)

For the second month running, we are proud to highlight the work of our active Taiwan community. Specifically, we are excited to recognize Samina Fu, a Network and Systems Engineering graduate of National Chiao Tung University in Taiwan. Samina is a co-organizer of our Hsinchu community, which she has been bringing together since early 2017. She helped to organize our first of 120 Hacktoberfest Meetups last year, and works closely with Peter Hsu (who we highlighted last month) as a core contributor to the CDNJS project.

David Endersby (@davidendersby1)

When David filled out our Meetup Organizer Application Form in September 2016, we didn’t know he would go on to lead one of our largest and most active Meetup communities. Since early 2017, David has worked hard to develop a blueprint for successfully running a new Meetup community, covering everything from starting out, to finding speakers, to time management, choosing a location, feeding attendees, and more. His efforts have produced a wealth of content and he has an ambitious plan for 2018. If you’re interested in joining, he welcomes you with open arms!


Aditya’s, Samina’s, and David’s efforts exemplify the qualities we are proud to see in our community. They all have a knack for educating the community (off- and online), promoting both learning and community collaboration. But there are so many others we have yet to recognize! We look forward to highlighting more of our amazing community members in the months to come.

Are you interested in getting more involved in the DigitalOcean community? Here are a few places to start:

Know someone who fits the profile? Nominate a member to be recognized in the comments!


Untitled

Published 10 Apr 2018 by Sam Wilson in Sam's notebook.

I find autogenerated API docs for Javascript projects (e.g.) so much more useful than those for PHP projects.


Morning joy

Published 9 Apr 2018 by Sam Wilson in Sam's notebook.

I love the morning time, while the brain is still sharp enough to focus on one thing and get it done, but dull enough not to remember the other things and derail everything with panic about there being too much to do. The morning is when the world properly exists, and is broad and friendly.


Email doesn’t disappear

Published 9 Apr 2018 by Bron Gondwana in FastMail blog.

More and more often we are seeing stories like this one from Facebook about who has control over your messages on closed platforms.

I keep saying in response: email is your electronic memory. Your email is your copy of a conversation. Nobody, from the lowliest spammer to the grand exalted CEO of a massive company, can remove or change the content of an email message they have sent to you.

At first glance, Facebook Messenger seems to work the same way. You can delete your copy of any message in a conversation, but the other parties keep their unchanged copy. However, it turns out that insiders with privileged access can change history for somebody else, creating an effect similar to gaslighting where you can no longer confirm your recollection of what was once said.

In short, centralised social networks are not a safe repository for your electronic memory. They can change their policies and retroactively change messages underneath you.

With email, it’s all based on open standards, and you can choose a provider you trust to retain messages for you.

FastMail is a provider you can trust

We have built our business on a very simple proposition: we proudly charge money in exchange for providing a service. This means our loyalties are not split. We exist to serve your needs.

Our top three values are all about exactly this. You are our customer, your data belongs to you, we are good stewards of your data.

The right to remember, and the right to forget

We provide tools to allow you to implement rules around retention (for example, you can have your Trash folder automatically purge messages after 30 days), but we don’t ever remove messages without your consent and intent.

If you do delete messages, we don’t destroy them immediately, because our experience has shown that people make mistakes. We allow a window of between one and two weeks in which deleted messages can be recovered (see technical notes at the end of this post for exact details).

Since 2010, our self-service tool has allowed you to restore those recently deleted messages. We don’t charge for using this service; it’s part of making sure that decisions about your data are made by you, and of helping you recover gracefully from mistakes.

We only scan message content to build the indexes that power our great search tools and (on delivery) for spam protection – so once messages are deleted, they’re really gone. You have the right to forget emails you don’t want to keep.

You’re in control

Thanks as always to our customers who choose what to remember, and what to forget. It’s your email, and you are in control of its lifecycle. Our role is to provide the tools to implement your will.

Nobody else decides how long you keep your email for, and nobody can take back a message they’ve sent you. Your email, your memory, your choice.

An Update

Since I started drafting this article, Facebook have doubled down on the unsend feature, saying that they will make it possible for anybody to remove past messages.

While it's definitely more equitable, I still don't think this is a good idea. People will work around it by screenshotting conversations, and it just makes the platform more wasteful of everybody's time and effort. Plus it's much easier to fake a screenshot than to fake up a live Facebook Messenger interface while scrolling back to show messages.

There are really a lot of bad things about unreliable messaging systems, which is exactly what Wired has to say about this rushed and poorly thought-out feature. Stick with email for important communications.


Technical notes:

We currently purge messages every Sunday when the server load is lowest – and only messages which were deleted over a week ago. Therefore the exact calculation for message retention is one week plus the time until the next Sunday plus however long it takes the server to get to your mailbox as it scans through all the mailboxes containing purged messages. Deleting files is surprisingly expensive on most filesystems, which is why we save it until the servers are least busy.
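That retention arithmetic can be sketched roughly as follows. This is a hedged illustration only (the real purge timing also depends on server load and scan order, as noted above), and earliest_purge_date is a name made up for the example:

```python
import datetime

def earliest_purge_date(deleted_on):
    """First Sunday at least one week after deletion.

    A sketch of the note above: purges run on Sundays, and only
    messages deleted over a week ago are eligible. The real timing
    also depends on how long the scan takes to reach your mailbox.
    """
    eligible = deleted_on + datetime.timedelta(days=7)
    # Advance to the next Sunday (weekday() == 6) on or after 'eligible'.
    days_until_sunday = (6 - eligible.weekday()) % 7
    return eligible + datetime.timedelta(days=days_until_sunday)

# A message deleted on Wednesday 4 April 2018 becomes eligible on
# Wednesday 11 April, and is purged on Sunday 15 April.
print(earliest_purge_date(datetime.date(2018, 4, 4)))  # 2018-04-15
```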

We also have backups, which may retain deleted messages for longer based on repack schedules, but which can’t be automatically restored with messages that were deleted longer than two weeks ago.


cardiParty 2018-04 Melbourne Open Mic Night

Published 8 Apr 2018 by Justine in newCardigan.

a GLAMRous storytelling event 20 April 2018 6.30pm

Find out more...


Untitled

Published 6 Apr 2018 by Sam Wilson in Sam's notebook.

I want a login-by-emailed-link feature for MediaWiki, so it’s easier to log in from mobile.


Wikidata Map March 2018

Published 6 Apr 2018 by addshore in Addshore.

It’s time for the first 2018 installment of the Wikidata Map. It has been roughly 4 months since the last post, which compared July 2017 to November 2017. Here we will compare November 2017 to March 2018. For anyone new to this series of posts, you can check back on the progression of these maps by looking at the posts on the series page.

Each Wikidata Item with a Coordinate Location (P625) will have a single pixel dot. The more Items present, the more pixel dots, and the more the map will glow in that area. The pixel dots are plotted on a totally black canvas, so any land-mass outline simply comes from the mass of dots. You can find the raw data for these maps and all historical maps on Wikimedia Tool Labs.

Looking at the two maps below (the more recent map being on the right) it is hard to see the differences by eye, which is why I’ll use ImageMagick to generate a comparison image. Previous comparisons have used Resemble.js.

ImageMagick has a compare script that can highlight areas of change in another colour and soften the unchanged areas of the image. The image below highlights the changed areas in violet while fading everything that remains unchanged between the two images. As a result, all areas highlighted in violet have either had Items added or removed. These areas can then be compared with the originals to confirm that they are in fact additions.

If you want to try comparing two maps, or any two images, using ImageMagick, you can try out https://online-image-comparison.com/, which allows you to do this online!
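The idea behind the comparison can be illustrated with a toy per-pixel diff. Here two tiny 0/1 grids stand in for the November and March maps, and changed pixels are marked ‘V’ for violet; this is a made-up illustration of the concept, not what ImageMagick actually does internally:

```python
# Toy version of the map comparison: two small "maps" as grids of
# 0/1 pixels (1 = an Item plotted there). Changed pixels are marked
# 'V' (for violet); unchanged pixels are faded to '.'.

november = [
    [0, 1, 0],
    [1, 1, 0],
    [0, 0, 0],
]
march = [
    [0, 1, 1],
    [1, 1, 0],
    [0, 1, 0],
]

def diff_map(old, new):
    """Return a grid marking changed pixels, one cell per pixel."""
    return [
        ['V' if a != b else '.' for a, b in zip(row_old, row_new)]
        for row_old, row_new in zip(old, new)
    ]

for row in diff_map(november, march):
    print(''.join(row))
# ..V
# ...
# .V.
```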

What has changed?

The main areas of change that are visible on the diff are:

There is a covering of violet across the entire map, but these are the key areas.

If you know the causes for these areas of greatest increase, or think I have missed something important, then leave a comment below and I’ll be sure to update this post with links to the projects and or users.

Files on Commons

All sizes of the Wikidata map for March have been uploaded to Wikimedia Commons.

The post Wikidata Map March 2018 appeared first on Addshore.


New MediaWiki extension: AutoCategoriseUploads

Published 5 Apr 2018 by Sam Wilson in Sam's notebook.

New MediaWiki extension: AutoCategoriseUploads. It “automatically adds categories to new file uploads based on keyword metadata found in the file. The following metadata types are supported: XMP (many file types, including JPG, PNG, PDF, etc.); IPTC (JPG); ID3 (MP3)”.

Unfortunately there’s no code yet in the repository, so there’s nothing to test. Sounds interesting though.


See My Hat! new exhibition for children and families coming soon

Published 3 Apr 2018 by carinamm in State Library of Western Australia Blog.

Studio portrait of Ella Mackay wearing a paper hat, 1915, State Library of Western Australia, 230179PD

Featuring photographs and picture books from the State Library collections, this exhibition is designed especially for children and families. Dress hats, uniform hats and fancy dress hats are just some of the millinery styles to explore. Children and their families will have the opportunity to make a hat and share a picture book together.

See My Hat! will be on display in the Story Place Gallery, Mezzanine floor from Tuesday 10 April – Wednesday 11 July.


Episode 5: Brian Wolff

Published 3 Apr 2018 by Yaron Koren in Between the Brackets: a MediaWiki Podcast.

Brian Wolff (username Bawolff) works in the Security team at the Wikimedia Foundation, and has been doing MediaWiki and MediaWiki extension development since 2009.

From 0 to Kubernetes cluster with Ingress on custom VMs

Published 2 Apr 2018 by addshore in Addshore.

While working on a new Mediawiki project, and trying to setup a Kubernetes cluster on Wikimedia Cloud VPS to run it on, I hit a couple of snags. These were mainly to do with ingress into the cluster through a single static IP address and some sort of load balancer, which is usually provided by your cloud provider. I faffed around with various NodePort things, custom load balancer setups and ingress configurations before finally getting to a solution that worked for me using ingress and a traefik load balancer.

Below you’ll find my walkthrough, which works on Wikimedia Cloud VPS. Cloud VPS is an OpenStack-powered public cloud solution. The walkthrough should also work for any other VPS host, or a bare-metal setup, with few or no alterations.

Step 0 – Have machines to run Kubernetes on

This walkthrough will use 1 master and 4 nodes, but the principle should work with any other setup (single master single node OR combined master and node).

In the below setup m1.small and m1.medium are VPS flavours on Wikimedia Cloud VPS. m1.small has 1 CPU, 2 GB mem and 20 GB disk; m1.medium has 2 CPU, 4 GB mem and 40 GB disk. Each machine was running debian-9.3-stretch.

One of the nodes needs to have a publicly accessible IP address (a Floating IP on Wikimedia Cloud VPS). In this walkthrough we will assign this to the first node, node-01. Eventually all traffic will flow through this node.

If you have firewalls around your machines (as is the case with Wikimedia Cloud VPS) then you will also need to setup some firewall rules. The ingress rules should probably be slightly stricter as the below settings will allow ingress on any port.

Make sure you turn swap off, or you will get issues with kubernetes further down the line (I’m not sure if this is actually the correct way to do this, but it worked for my testing):

sudo swapoff -a
sudo sed -i '/ swap /d' /etc/fstab

Step 1 – Install packages (Docker & Kubernetes)

You need to run the following on ALL machines.

These instructions basically come from the docs for installing kubeadm, specifically, the docker and kube cli tools section.

If these machines are new, make sure you have updated apt:

sudo apt-get update

And install some basic packages that we need as part of this install step:

sudo apt-get install -y apt-transport-https ca-certificates curl software-properties-common

Next add the Docker and Kubernetes apt repos to the sources and update apt again:

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb https://download.docker.com/linux/$(. /etc/os-release; echo "$ID") $(lsb_release -cs) stable"
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update

Install Docker:

sudo apt-get install -y docker-ce=$(apt-cache madison docker-ce | grep 17.03 | head -1 | awk '{print $3}')

Install the Kube packages:

sudo apt-get install -y kubelet kubeadm kubectl

You can make sure that everything installed correctly by checking the docker and kubeadm version on all machines:

docker --version
kubeadm version

Step 2.0 – Setup the Master

Setup the cluster with a CIDR range by running the following:

sudo kubeadm init --pod-network-cidr=10.244.0.0/16

The init command will spit out a join token. You can copy it now, but don’t worry: we can retrieve it later.

At this point you can choose to update your own user .kube config so that you can use kubectl from your own user in the future:

mkdir -p $HOME/.kube
rm -f $HOME/.kube/config
sudo cp -if /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Setup a Flannel virtual network:

sudo sysctl net.bridge.bridge-nf-call-iptables=1
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/k8s-manifests/kube-flannel-rbac.yml

These yml files come directly from the coreos/flannel git repository on GitHub, and you can easily pin them at a specific commit (or run them from your own copies). I used kube-flannel.yml and kube-flannel-rbac.yml.

Step 2.1 – Setup the Nodes

Run the following for networking to be correctly setup on each node:

sudo sysctl net.bridge.bridge-nf-call-iptables=1

In order to connect the nodes to the master you need to get the join command by running the following on the master:

sudo kubeadm token create --print-join-command

Then run this join command (the one output by the command above) on each of the nodes. For example:

sudo kubeadm join 10.68.17.50:6443 --token whverq.hwixqd5mb5dhjz1f --discovery-token-ca-cert-hash sha256:d15bb42ebb761691e3c8b49f31888292c9978522df786c4jui817878a48d79b4

Step 2.2 – Setup the Ingress (traefik)

On the master, mark node-01 with a label stating that it has a public IP address:

kubectl label nodes node-01 haspublicip=true --overwrite

And apply a traefik manifest:

kubectl apply -f https://gist.github.com/addshore/a29affcf75868f018f2f586c0010f43d

This manifest comes from a gist on GitHub. Of course, you should really run it from a local static copy.

Step 3.0 – Setup the Kubernetes Dashboard

This isn’t really required; at this stage your kubernetes cluster should already be working. But for testing things and visualizing the cluster, the kubernetes dashboard can be a nice bit of eye candy.

You can use this gist deployment manifest to run the dashboard.

Note: You should alter the Ingress configuration at the bottom of the manifest. Ingress is currently set to kubernetes-dashboard.k8s-example.addshore.com and kubernetes-dashboard-secure.k8s-example.addshore.com. Some basic authentication is also added with the username “dashuser” and password “dashpass”

Step 3.1 – Setup a test service (guids)

Again, your cluster should all be set up at this point, but if you want a simple service to play around with you can use the alexellis2/guid-service docker image, which was used in the blog post “Kubernetes on bare-metal in minutes”.

You can use this gist deployment manifest to run the service.

Note: You should alter the Ingress configuration at the bottom of the manifest. Ingress is currently set to guids.k8s-example.addshore.com.

This service returns simple GUIDs, including the name of the container that the GUID was generated by. For example:

$ curl http://guids.k8s-example.addshore.com/guid
{"guid":"fb426500-4668-439d-b324-6b34d224a7df","container":"guids-5b7f49454-2ct2b"}

Automating this setup

While setting up my own kubernetes cluster using the steps above I actually used the python library and command line tool called fabric.

This allowed me to minimize my entire installation and setup to a few simple commands:

fab provision
fab initCluster
fab setupIngressService
fab deployDashboard
fab deployGuids

I might write a blog post about this in the future, until then fabric is definitely worth a read. I much prefer it to other tools (such as ansible) for fast prototyping and repeatability.

Other notes

This setup was tested roughly 1 hour before writing this blog post with some brand new VMs, and everything went swimmingly; however, that doesn’t mean things will go perfectly for you.

I don’t think I ever correctly set swap to remain off for any of the machines.

If a machine goes down, it will not rejoin the cluster, you will have to manually rejoin it (the last part of step 2.1).

The post From 0 to Kubernetes cluster with Ingress on custom VMs appeared first on Addshore.


v2.4.8

Published 2 Apr 2018 by fabpot in Tags from Twig.


Digital preservation begins at home

Published 29 Mar 2018 by Jenny Mitcham in Digital Archiving at the University of York.

A couple of things happened recently to remind me of the fact that I sometimes need to step out of my little bubble of digital preservation expertise.

It is a bubble in which I assume that everyone knows what language I'm speaking, in which everyone knows how important it is to back up your data, knows where their digital assets are stored, how big they might be and even what file formats they hold.

But in order to communicate with donors and depositors I need to move outside that bubble otherwise opportunities may be missed.

A disaster story

Firstly a relative of mine lost their laptop...along with all their digital photographs, documents etc.

I won't tell you who they are or how they lost it for fear of embarrassing them...

It wasn’t backed up...or at least not in a consistent way.

How can this have happened?

I am such a vocal advocate of digital preservation and do try and communicate outside my echo chamber (see for example my blog for International Digital Preservation Day "Save your digital stuff!") but perhaps I should take this message closer to home.

Lesson #1:

Digital preservation advocacy should definitely begin at home

When a back up is not a back up...

In a slightly delayed response to this sad event I resolved to help another family member ensure that their data was 'safe'. I was directed to their computer and a portable hard drive that is used as their back up. They confessed that they didn’t back up their digital photographs very often...and couldn’t remember the last time they had actually done so.

I asked where their files were stored on the computer and they didn’t know (well at least, they couldn’t explain it to me verbally).

They could however show me how they get to them, so from that point I could work it out. Essentially everything was in ‘My Documents’ or ‘My Pictures’.

Lesson #2:

Don’t assume anything. Just because someone uses a computer regularly it doesn’t mean they know where they put things.

Having looked firstly at what was on the computer and then what was on the hard drive it became apparent that the hard drive was not actually a ‘back up’ of the PC at all, but contained copies of data from a previous PC.

Nothing on the current PC was backed up and nothing on the hard drive was backed up.

There were however multiple copies of the same thing on the portable hard drive. I guess some people might consider that a back up of sorts but certainly not a very robust one.

So I spent a bit of time ensuring that there were 2 copies of everything (one on the PC and one on the portable hard drive) and promised to come back and do it again in a few months time.

Lesson #3:

Just because someone says they have 'a back up' it does not mean it actually is a back up.

Talking to donors and depositors

All of this made me re-evaluate my communication with potential donors and depositors.

Not everyone is confident in communicating about digital archives. Not everyone speaks the same language or uses the same words to mean the same thing.

In a recent example of this, someone who was discussing the transfer of a digital archive to the Borthwick talked about a 'database'. I prepared myself to receive a set of related tables of structured data alongside accompanying documentation to describe field names and table relationships, however, as the conversation evolved it became apparent that there was actually no database at all. The term database had simply been used to describe a collection of unstructured documents and images.

I'm taking this as a timely reminder that I should try and leave my assumptions behind me when communicating about digital archives or digital housekeeping practices from this point forth.


The challenge of calendaring

Published 29 Mar 2018 by David Gurvich in FastMail blog.


We often focus on email functionality, as email is the core of our product. However, FastMail has two other components: calendars and contacts.

In this post we’re focusing on our calendar.

While calendaring has become an integral part of our flagship service, our calendar feature was only introduced in 2014, making it still relatively young in the history of FastMail. Remember we’ve been around since 1999, which might equate to around 100 in modern tech years…

Just like email, providing a calendar function presents its own challenges. In short, doing calendaring well is, well, hard. One of the main reasons is that standards related to calendaring are still all over the place. We're working hard on making these standards more consistent so that we can improve online calendaring for everyone.

One of our core values is a commitment to open standards. We’re not looking to create a walled garden by developing proprietary technology where your data is locked down to one source or provider.

By continuing to use CalDAV and iCalendar, FastMail helps drive open standards in online calendaring, and helps you use your information as you choose, syncing between different service providers and devices (just as with email).

The data in your FastMail calendars is stored in open formats and can be downloaded or backed up using any number of standard tools that speak standard protocols.
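To make that concrete, here is what the open iCalendar format (RFC 5545) looks like, with a deliberately naive sketch of pulling properties out of an event. This is illustrative only: real .ics files need proper line unfolding, escaping and time-zone handling, so any production code should use a real parser rather than this.

```javascript
// A minimal, hand-rolled look at iCalendar (RFC 5545) data, the open
// format that CalDAV servers exchange. Illustrative only.
const ics = [
  "BEGIN:VCALENDAR",
  "VERSION:2.0",
  "BEGIN:VEVENT",
  "UID:example-1234@example.com",
  "DTSTART:20180401T090000Z",
  "SUMMARY:Team meeting",
  "END:VEVENT",
  "END:VCALENDAR",
].join("\r\n");

// Pull simple KEY:VALUE properties out of the first VEVENT block.
function parseFirstEvent(text) {
  const lines = text.split(/\r?\n/);
  const start = lines.indexOf("BEGIN:VEVENT");
  const end = lines.indexOf("END:VEVENT");
  const event = {};
  for (const line of lines.slice(start + 1, end)) {
    const i = line.indexOf(":");
    if (i > 0) event[line.slice(0, i)] = line.slice(i + 1);
  }
  return event;
}
```

Because the format is plain text with a published specification, any standards-speaking tool can read the same data, which is exactly the point of avoiding a walled garden.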

Community-minded calendaring

We are responsible members of many open source communities. We use, create, sponsor and contribute back to a number of projects, including the Cyrus email server.

A significant part of FastMail’s infrastructure runs on Cyrus, the open source email communication technology that was initially developed at CMU.

Right now one of our biggest projects is implementing JMAP as a new standard, which will help to extend the functionality of calendaring and replace CalDAV.

In order for us to live our values we also invest in our people. And when it comes to calendaring we’ve got a great team that helps us to improve and advance calendaring for all of our users, and hopefully the internet in general.

Ken Murchison, one of our calendar experts, was crucial to getting calendaring off the ground. Without Ken, calendaring in Cyrus may never have happened.

When Cyrus lacked any calendaring functionality it was Ken, then a CMU employee, who took up a casual challenge as a pet project and managed to build a calendaring function with very basic features.

Ken is quick to point out that part of Cyrus's ongoing calendaring development was made possible by attending CalConnect and meeting and speaking with other developers.


Ken met Bron around the 2.5 release of Cyrus, and this fortuitous meeting laid the foundation for several improvements to the calendar and ongoing CalConnect attendance (and, of course, Ken becoming a permanent member of the FastMail team).

For the last few years FastMail has been a member of CalConnect, and attending this conference really is important to our ongoing development. Robert, another important part of our calendar team, recently wrote about the importance of CalConnect to FastMail.

Looking ahead

We’re hoping to see JMAP recognized as a standard during 2018; once it is fully implemented, it will bring many more improvements across email, calendars and contacts.

At a top level this will help us continually improve the backend, performance, scheduling and subscriptions.

At a feature level we’re already testing some exciting new technology. One of these is ‘consensus scheduling’, recently discussed at CalConnect, which extends the original scheduling functionality so that a client can send multiple time options for a meeting or appointment to a group of people. Instead of going back and forth to confirm a meeting time, it can all be done within the calendar.

Another feature we’ve started to explore is a polling function that could eventually be applied to things such as meeting confirmations for service providers, further reducing the reliance on telephone-based appointment making. Currently, a formal RFC is underway to help implement a standard.

We’re looking forward to introducing ongoing calendar improvements and features into FastMail and we’ll formally announce these as they enter our production environment.

A special event on the calendar

Earlier this year Ken was the ninth recipient of the CalConnect Distinguished Service Award.


This award is a testament to Ken’s dedication to improving calendaring specifications and standards. He is also the author of several RFCs and specifications, which have helped to define calendaring for users the world over.

Reflecting on his achievement, Ken remains as modest as ever, “it’s this interaction with other developers (in attending CalConnect) that is so important, testing and banging out code together.”

Ken’s achievements in the calendaring space are immense and he continues to help improve calendaring for all of us.

As our CEO Bron noted, “CyrusIMAP now has best-in-the-world implementation of calendars and contacts due to Ken’s involvement in CalConnect.”

Well done Ken!


Speaker Profile: Donna Edwards

Published 28 Mar 2018 by Rebecca Waters in DDD Perth - Medium.

Donna Edwards presenting at DDD Perth 2017 (DDD Perth Flickr Album)

Donna Edwards, a well known figure in the Perth software industry, presented at DDD Perth 2017 on Attraction and retention strategies for Women in Tech. She is the Events Manager for Women in Technology WA (WiTWA), on the committee for SQL Saturday, and VP of the Central Communicator Toastmasters club. I asked Donna about her experiences at DDD Perth.

From a Director of ACR, a General Manager at Ignia and more recently, the State Delivery Manager at Readify, you have 20-odd years experience in the IT industry. Can you tell me a little about your career to date?

I’ve worked in different roles within the IT industry from sales, to crawling under desks setting up PCs, to phone support and even installing hardware and software for people. In the past ten years I’ve focused on culture and business growth. My passion has always been creating awesome places to work, winning high quality work and growing a phenomenal team. More than anything I believe life is too short to not love what you do — so follow what you love and everything will work out 😊

Words to live by right there. You’re a seasoned presenter on speaker panels; I’ve seen you speak at a number of events. Was DDD Perth one of your first solo presentations at a conference?

Yes, I really enjoy panels and have done quite a few previously; however, DDD was my first solo presentation (over ten minutes long). Getting selected for a 45 minute slot was a huge achievement and pretty scary I have to admit 😊

What helped you decide to submit a talk to DDD Perth?

I knew that DDD was trying to attract more women presenters after 2016 and I’d never actually submitted for a conference before so I saw it as a challenge! My partner was also submitting so we actually spent a day whilst we were on a cruise sitting out on the deck writing out submissions 😊 we both submitted two talks. I certainly didn’t expect to get selected and was probably hoping not to haha!

That sounds like a bit of a #BeBoldForChange pledge from International Women’s Day 2017. Have you a #PressForProgress goal for 2018?

For me, it's always about doing more and continuing to strive to be better both personally as well as achieve more for the community each year. This year I am about to take on another three committee roles as well as continuing to focus on taking the WiTWA events to another level. We've sold out our last three events, hitting record numbers of attendees (200). It is super exciting to see the level of engaged women in our tech community. Just this week I shared four tech events with all female panels / speakers which is brilliant to see! And it will only get bigger and better 😊

Back to DDD Perth…Did you enjoy the day? How about presenting?

The day was fantastic. I got to hear some brilliant talks from the likes of Patima and Nathan and also got roped into being on a panel with them later in the day! There was a great vibe and everyone seemed to be really enjoying themselves along with lots of familiar faces as is the Perth IT industry 😊 Presenting was actually super fun! We had a few technical issues so it started a bit late which made me a little nervous but once I got started I thoroughly enjoyed the experience. I had done LOADS of practice so I felt pretty comfortable with the slides and content which definitely saved me! It didn’t help that I was in a job interview process and the two potential bosses were both watching my presentation — no pressure. I must have done ok cause I got the job 😉

Oh Wow! That’s an interesting point. As someone who makes hiring decisions for the company you work for, do you like seeing presentations and the like on a curriculum vitae?

Absolutely - whether they get involved in community events by either presenting or volunteering is a huge positive when I am choosing between applicants.

What are you looking forward to seeing in DDD Perth 2018?

The level of diversity for 2017 was great so I’m keen to see that remain or improve for 2018. I’m pretty sure it will be even bigger and better after last year sold out so that’s super exciting! More great sponsors no doubt and hopefully an even bigger after party (which means it will be huge). Finally looking forward to learning a lot — the best thing about DDD is the variety of awesome speakers and topics so you can really tailor the day for what you are interested in.

Thanks for chatting to us, Donna!


Speaker Profile: Donna Edwards was originally published in DDD Perth on Medium, where people are continuing the conversation by highlighting and responding to this story.


Midwest Heritage of Western Australia

Published 27 Mar 2018 by Sam Wilson in Sam's notebook.

Midwest Heritage of Western Australia is a terrific database of records of graves and deceased people in the mid-west region of WA.


Untitled

Published 27 Mar 2018 by Sam Wilson in Sam's notebook.

I joined newCardigan today.


AggregateIQ Brexit and SCL

Published 25 Mar 2018 by addshore in Addshore.

UPDATE 02/04/2018: Looks like AggregateIQ may have had a contract with Cambridge Analytica, but didn’t disclose it because of an NDA… But all spoilt by an unsecured GitLab instance.  https://nakedsecurity.sophos.com/2018/03/28/cambridge-analyticas-secret-coding-sauce-allegedly-leaked/


I wonder why AggregateIQ state that they have never entered into a contract with Cambridge Analytica, but say nothing about contracts with SCL. They do, however, mention that they have never been part of SCL or Cambridge Analytica…

Channel 4 report on Brexit and AggregateIQ

From the AggregateIQ website & press release:

AggregateIQ is a digital advertising, web and software development company based in Canada. It is and has always been 100% Canadian owned and operated. AggregateIQ has never been and is not a part of Cambridge Analytica or SCL. Aggregate IQ has never entered into a contract with Cambridge Analytica. Chris Wylie has never been employed by AggregateIQ.
AggregateIQ works in full compliance within all legal and regulatory requirements in all jurisdictions where it operates. It has never knowingly been involved in any illegal activity. All work AggregateIQ does for each client is kept separate from every other client.

Links

The post AggregateIQ Brexit and SCL appeared first on Addshore.


Love is

Published 24 Mar 2018 by jenimcmillan in Jeni McMillan.

Lovers

I am passing through countries, discarding them like forgotten lovers. Now when I think about love, I have many more things to say. I think love is a vulnerability, a willingness to trust someone with a precious heart. To be so child-like and joyous that dancing and singing is a natural state. A heightened awareness of the beloved. A look, a tiny movement, a sigh, a tremor, a breath, a heartbeat, these are the signs that reveal the inner state. But love passes, in the same way that cities fade into the distance as I travel across Europe. That is what you tell me. And so, I continue my journey.

‘Take your joy and spread it across the world, he wrote.

At least begin with a smile and hug yourself, she thought.’


Resurrecting a MediaWiki instance

Published 24 Mar 2018 by in Posts on The bugalore.

This was my first time backing up and setting up a new MediaWiki (-vagrant) instance from scratch so I decided to document it in the hope that future me might find it useful. We (teams at Wikimedia) often use MediaWiki-Vagrant instances on Labs, err, Cloud VPS to test and demonstrate our projects. It’s also pretty handy to be able to use it when one’s local dev environment is out of order (way more common than you’d think).

Stories Behind the Songs

Published 24 Mar 2018 by Dave Robertson in Dave Robertson.

Every song has a story. Here’s a little background on the writing and recording of each of the songs on Oil, Love & Oxygen. It is sometimes geeky, sometimes political and usually personal, though I reserve the right to be coy when I choose!

  1. Close Your Mouth is a funny one to start with, because it’s the most vague in terms of meaning – I think there were ideas floating around in my head about over-thinking in relationships, but it is not about anything specific. The “bed” of this track was a live take with drums and semi-electric guitar using just a pair of ribbon microphones – very minimalist! There is some beautiful crazy saxophone from Professor Merle in the background of the mix at the 1:02 minute mark.
  2. Good Together is one of my oldest songs, and the recording of it started eight years ago! It features catchy accordion from Cat Kohn (now Melbourne based) and a dreamy electric guitar solo from Ken Williford (now works for NASA). The lyrics are fairly direct storytelling, so I don’t feel the need to elaborate.
  3. Oil, Love & Oxygen. I’ve been banging on about the climate crisis for more than twenty years, and this is the song where I most directly address the emotional side of it. For the lyric writing nerds: I used a triplet syllable stress pattern in the verses. The choir part was an impromptu gathering of friends at the end of a house concert. I first played this song as a duo with Marie O’Dwyer who plays the piano part on this version. The almost subliminal organ part is Rachel playing a 1960s electric organ she found on the side of the road.
  4. The Relation Ship. I wrote this on the ferry to Rotto. The “pain body” concept in the chorus comes from Eckhart Tolle’s book A New Earth, and is similar to the sankhara concept in Vipassana. For the a cappella intro I experimented with double tracking the band singing together around a mid (omni) / side (ribbon) microphone setup, without using headphones.
  5. Perfect as Cats. As a kid I was fascinated by the big cats, especially snow leopards. This song is not about snow leopards. The drums and bass here were the only parts of the album recorded in a purpose-built studio (the old Shanghai Twang). Ben Franz plays the double bass and Rob Binelli the drums (one of the six drummers on the album!).
  6. Dull Ache. Sometimes I wished I lived in Greece, Italy, The Philippines, Costa Rica, Mexico, Ecuador, Nigeria or Spain. The common theme here is the siesta! I’m not at my best in the mid arvo, partly because my sensitive eyes get weary in our harsh sun. Around 4 or 5pm the world becomes a softer place to me, and my mojo returns. This song is also more generally about existential angst and depression. Always reach out for support when you need it – it is not easy dealing with these crazy grey soft things behind our eyes. I love Rob’s crazy guitars on the second half of this song – they are two full takes panned either side without edits.
  7. Kissing and Comedy was inspired by a quote from Tom Robbins’s novel Even Cowgirls Get The Blues: “Maybe the human animal has contributed really nothing to the universe but kissing and comedy–but by God that’s plenty.” I wrote it on the Overland Train. The drums are a single playful take by Angus Diggs, recorded in Dave Johnson’s bedroom with my trusty pair of ribbon mics, and the song was built up from there.
  8. Now That We’ve Kissed was co-written with Ivy Penny and is about being kissed by famous people (which I haven’t) and the implications of kisses in general. The things that “come from a kiss” were literally phoned in by friends.
  9. Rogue State was written in 2007, just prior to the Australian federal election and the Bali Climate Change Conference. It reflects on Australia’s sabotage of progress on climate change at the Kyoto conference in 1997, as documented in books such as Guy Pearse’s “High & Dry: John Howard, Climate Change and the Selling of Australia’s Future” and Clive Hamilton’s “Scorcher”. I had no intention of putting this old song on the album, until the last minute when I decided it was still sadly relevant given so many politicians still show a lack of respect and understanding of science and the planet that supports us. The recording of the song was also an excuse to feature a bit of Peter Grayling cello magic.
  10. Montreal was the first song I wrote on ukulele, though I ended up recording it with guitar. Sian Brown, who helped greatly with recording my vocals for the album, makes a harmony cameo at the end of the song. As for the lyrics, it’s a fairly obvious bittersweet love song.
  11. I Stood You Up is my account of attending a fantastic music camp called Rhythm Song, and kicking myself for not following through with a potential jam with songwriting legend Kristina Olsen. One of her pieces of advice to performers is to make their audience laugh to balance out the sadder songs in a set. The song was written in a mad rush two hours before a Song Club when I thought “What music can I write quickly?… Well I don’t have a blues song yet!”. This version was largely recorded prior to The Kiss List taking shape, so it features multiple guest musicians who are listed in the liner notes.
  12. Measuring the Clouds I wrote for my Dad’s birthday a few years ago. He used to be a Weather Observer in the 60s, sending up the big balloons etc. from many locations around WA such as Cocos Island. He had a beautiful eccentric sense of humour and would answer the phone with “Charlie’s Chook House and Chicken Factory, Chief Chook speaking”. The musical challenge I set myself with this song was to use a five bar pattern in the verse. A cello part was recorded, but was dropped in the mixing when I decided it made the song feel too heavy and I wanted it to feel light and airy.

Share




gitgraph.js and codepen.io for git visualization

Published 22 Mar 2018 by addshore in Addshore.

I was looking for a new tool for easily visualizing git branches and workflows to try and visually show how Gerrit works (in terms of git basics) to clear up some confusions. I spent a short while reading stackoverflow, although most of the suggestions weren’t really any good as I didn’t want to visualize a real repository, but a fake set of hypothetical branches and commits.

A friend suggested Graphviz, and I quickly found webgraphviz.com, which was going in the right direction, but it would require me to learn how to write DOT graph files.

Eventually I found gitgraph.js, a small JavaScript library for visualizing branching ‘things’ such as git (well, mainly git, hence the name), which produces graphics such as the one below.

In order to rapidly prototype with gitgraph I set up a blueprint codepen.io pen with the following HTML …

<html>
  <head>
    <link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/gitgraph.js/1.11.4/gitgraph.css" />
    <script src="https://cdnjs.cloudflare.com/ajax/libs/gitgraph.js/1.11.4/gitgraph.min.js"></script>
  </head>
  <body><canvas id="graph"></canvas></body>
</html>

… and following JS …

var graph = new GitGraph({
  template: "metro", // or blackarrow
  orientation: "vertical",
  elementId: 'graph',
  mode: "extended", // or compact if you don't want the messages  
});

var master = graph.branch("master");
master.commit( { message: "Initial Commit" });

… to render the rather simple single commit branch below …

Styling can be adjusted passing a template into the GitGraph object …

var myTemplateConfig = {
  colors: ["#008fb5", "#979797", "#f1c109", "#33cc33"],
  branch: {
    lineWidth: 3,
    spacingX: 30,
    labelRotation: 0
  },
  commit: {
    spacingY: 40,
    dot: {
      size: 10
    },
    message: {
      displayAuthor: false,
      displayBranch: true,
      displayHash: true,
      font: "normal 14pt Arial"
    }
  }
};
var myTemplate = new GitGraph.Template( myTemplateConfig );

var graph = new GitGraph({
  orientation: "vertical",
  elementId: 'graph',
  mode: "extended", // or compact if you don't want the messages
  template: myTemplate
});

… which would render …

The blueprint codepen for this style can be found at https://codepen.io/addshore/pen/xWdZXQ.

With this blueprint set up I now have a starting point for further visualizations using gitgraph and codepen comparing Gerrit and GitHub. For example, below: a merged pull request consisting of two commits, the second of which contains fixes for the first, versus a single Gerrit change that has two separate versions.

Keep an eye out on this blog for any more progress I make with this.

The post gitgraph.js and codepen.io for git visualization appeared first on Addshore.


How do I check how much memory a Mediawiki instance has available to it?

Published 21 Mar 2018 by user1258361 in Newest questions tagged mediawiki - Server Fault.

Before anyone posts something about checking php.ini, bear in mind there are all sorts of ways it could be overridden. Where's the admin page or panel that lists the amount of RAM available to mediawiki?

(Due diligence: Searches turned up nothing. Proof in links below)

https://www.google.com/search?q=mediawiki+admin+panel&ie=utf-8&oe=utf-8&client=firefox-b-1 only relevant link is https://www.mediawiki.org/wiki/Manual:System_administration which contains nothing about memory or RAM

https://www.google.com/search?q=mediawiki+admin+UI+how+much+memory+is+allocated&ie=utf-8&oe=utf-8&client=firefox-b-1 again nothing

https://www.google.com/search?q=mediawiki+how+to+check+how+much+memory+is+allocated&ie=utf-8&oe=utf-8&client=firefox-b-1 again, nothing. First link suggests increasing amount of RAM but that isn't useful if my php.ini is being ignored for unknown reasons


Who's a senior developer?

Published 21 Mar 2018 by in Posts on The bugalore.

Something at work today prompted me to get thinking about what people generally mean when they say they are/someone is a senior developer. There are some things which are a given - long-term technical experience, fairly good knowledge of complex languages and codebases, past experience working on products and so on. But in my opinion, there are a fair number of things which we don’t really talk about but are important skills a “senior” developer must possess to actually deserve that title.

Spike in Adam Conover Wikipedia page views | WikiWhat Epsiode 4

Published 21 Mar 2018 by addshore in Addshore.

This post relates to the WikiWhat YouTube video entitled “Adam Conover Does Not Like Fact Checking | WikiWhat Epsiode 4” by the channel Cntrl+Alt+Delete. It would appear that the video went slightly viral over the past few days, so let’s take a quick look at the impact that had on the Wikipedia page views for Adam’s article.

The video was published back in January, and although the viewing metrics are behind closed doors this video has had a lot of activity in the past 5 days (judging by the comments).

It is currently the most viewed video in the WikiWhat series, at 198,000 views, while the other three videos (John Bradley, Kate Upton & Lawrence Gillard Jr.) have only 6,000 views between them.

The sharp increase in video views translates rather well into Wikipedia page views for the Adam Conover article.

Generated with https://tools.wmflabs.org/pageviews/?project=en.wikipedia.org&platform=all-access&agent=user&start=2018-02-28&end=2018-03-20&pages=Adam_Conover|Talk:Adam_Conover|User:Adam_Conover|User_talk:Adam_Conover

Interestingly this doesn’t just show a page view increase for the article, but also the talk page and Adam Conover’s user pages, all of which are shown in the video.

It’s a shame that 200,000 YouTube views only translate to roughly 15,000 views on Wikipedia, but it is still interesting to see the effect videos such as this can have on the visibility of the site.

You can watch the page views for any Wikipedia page using the Page views tool.
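For the curious, the Page views tool sits on top of the Wikimedia REST pageviews API, and building a per-article request URL yourself is straightforward. The endpoint layout below is my understanding of that API, so check the REST API documentation before relying on it.

```javascript
// Sketch: build a per-article request URL for the Wikimedia REST
// pageviews API (the data source behind the Page views tool).
function pageviewsUrl(project, article, start, end) {
  const base =
    "https://wikimedia.org/api/rest_v1/metrics/pageviews/per-article";
  // access = all-access, agent = user, daily granularity, YYYYMMDD dates.
  return [
    base, project, "all-access", "user",
    encodeURIComponent(article), "daily", start, end,
  ].join("/");
}
```

For instance, `pageviewsUrl("en.wikipedia.org", "Adam_Conover", "20180228", "20180320")` would request the same daily counts for the Adam Conover article that are charted above.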

The post Spike in Adam Conover Wikipedia page views | WikiWhat Epsiode 4 appeared first on Addshore.


Cambridge Analytica, #DeleteFacebook, and adding EXIF data back to your photos

Published 20 Mar 2018 by addshore in Addshore.

Back in 2016 I wrote a short hacky script for taking HTML from Facebook data downloads and adding any data possible back to the image files that also came with the download. I created this as I wanted to grab all of my photos from Facebook and be able to upload them to Google Photos and have Google automatically slot them into the correct place in the timeline. Recent news articles about Cambridge Analytica and harvesting of Facebook data have led to many people deciding to leave the platform, so I decided to check back with my previous script, see if it still worked, and make it a little easier to use.

Step #1 – Move it to Github

Originally I hadn’t really planned on anyone else using the script; in fact I still don’t really plan on it. But let’s keep code on GitHub, not in aging blog posts.

https://github.com/addshore/facebook-data-image-exif

Step #2 – Docker

The previous version of the script had hard-coded paths, required users to modify the script, and also required downloading things such as ExifTool before it would work.

Now the GitHub repo contains a Dockerfile that includes the script and all necessary dependencies.

If you have Docker installed running the script is now as simple as docker run --rm -it -v //path/to/facebook/export/photos/directory://input facebook-data-image-exif.

Step #3 – Update the script for the new format

As far as I know, the format of the Facebook data-dump downloads is not documented anywhere. The format totally sucks; it would be quite nice to have some JSON included, or anything slightly more structured than HTML.

The new format moved the location of the HTML files for each photo album, but luckily the format of the HTML itself remained mostly the same (or at least the crappy parsing I created still worked).

The new data download did, however, do something odd with the image sources. Instead of loading them from the local directory (all of the data you have just downloaded), the srcs still point to the Facebook CDN. I'm not sure if this was intentional, but it’s rather crappy. I imagine that if you delete your whole Facebook account, these static HTML files will actually stop working. Sounds like someone needs to write a little script for this…
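Such a script might look something like this rough sketch. It assumes (these are guesses, not the documented dump layout) that the album HTML sits next to a photos directory whose files are named after the last path segment of each CDN URL:

```python
import os
import re

def localise_images(html, photos_dir="photos"):
    """Rewrite Facebook-CDN <img> srcs to local files where a copy exists."""
    def replace(match):
        url = match.group(1)
        # Last path segment, minus any query string, as the local filename.
        filename = url.split("/")[-1].split("?")[0]
        local = os.path.join(photos_dir, filename)
        if os.path.exists(local):
            return 'src="%s"' % local
        return match.group(0)  # no local copy: leave the CDN URL alone
    return re.sub(r'src="(https?://[^"]*fbcdn[^"]*)"', replace, html)
```

Run over each album's HTML file and write the result back out; any image without a matching local file is left untouched.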

Step #4 – Profit!

Well, no profit, but hopefully some people can make use of this again, especially those currently fleeing Facebook.

You can find the “download a copy of your data” link at the bottom of your Facebook settings.

I wonder if there are any public figures for the rate of Facebook account deactivations and deletions…

The post Cambridge Analytica, #DeleteFacebook, and adding EXIF data back to your photos appeared first on Addshore.


Episode 4: Bernhard Krabina

Published 20 Mar 2018 by Yaron Koren in Between the Brackets: a MediaWiki Podcast.

Bernhard Krabina is a researcher and consultant for KDZ, the Centre for Public Administration Research, a Vienna, Austria-based nonprofit that focuses on improving and modernizing technology-based solutions in government at all levels within Europe. He has been involved with MediaWiki in government for the last 10 years.

Links for some of the topics discussed:


v2.4.7

Published 20 Mar 2018 by fabpot in Tags from Twig.


v1.35.3

Published 20 Mar 2018 by fabpot in Tags from Twig.


Facebook Inc. starts cannibalizing Facebook

Published 13 Mar 2018 by Carlos Fenollosa in Carlos Fenollosa — Blog.

Xataka is probably the biggest Spanish blogging company. I have always admired them, from my amateur perspective, for their ability to make a business out of writing blogs.

That is why, when they invited me to contribute with an article about the decline of Facebook, I couldn't refuse. Here it is.

Facebook se estanca, pero Zuckerberg tiene un plan: el porqué de las adquisiciones millonarias de WhatsApp e Instagram, or Facebook is stagnating, but Zuckerberg has a plan: the reason behind the billion dollar acquisitions of WhatsApp and Instagram.

Tags: facebook, internet, mobile



Vaporous perfection

Published 12 Mar 2018 by jenimcmillan in Jeni McMillan.

DSC_0405

Clouds, so impermanent, advise her that reality is a mere dream. The illusion of solidity in their shape and comforting forms is exactly that, illusion, disappearing as temperature changes, wind blows or night extinguishes day. Why would a cloud be other than this? I marvel at such simplicity. I will endeavour to leave clouds to their journey, not fall in love with them in any other way than to share their pleasure of being vaporous perfection.


new note (near -32.027, 115.794)

Published 12 Mar 2018 by Cailean MacLellan in OpenStreetMap Notes.

Comment

Created 3 months ago by Cailean MacLellan
Road name is not up to date. This is Lutey Road between Moreing Road and Waddell Road

Full note

Created 3 months ago by Cailean MacLellan
Road name is not up to date. This is Lutey Road between Moreing Road and Waddell Road

Budapest Blues

Published 11 Mar 2018 by jenimcmillan in Jeni McMillan.

budapest

It’s Sunday and I’m in the most beautiful city in the world.

Cigarette butts crushed into broken tiles.

At my feet is another death, in the street,

Broken buildings and hollow dreams.

I’m in her arms like a stillborn child.

Feeling nothing, it seems,

But old.


Episode 3: Mike Cariaso

Published 6 Mar 2018 by Yaron Koren in Between the Brackets: a MediaWiki Podcast.

Mike Cariaso is the co-founder of SNPedia, a MediaWiki-based repository of genomic information (founded in 2006), and the creator of Promethease, personal genetic analysis software that uses SNPedia's data.

Links for some of the topics discussed:


Self-hosted websites are doomed to die

Published 3 Mar 2018 by Sam Wilson in Sam's notebook.

I keep wanting to be able to recommend the ‘best’ way for people (who don’t like command lines) to get research stuff online. Is it Flickr, Zenodo, Internet Archive, Wikimedia, and Github? Or is it a shared hosting account on Dreamhost, running MediaWiki, WordPress, and Piwigo? I’d rather the latter! Is it really that hard to set up your own website? (I don’t think so, but I probably can’t see what I can’t see.)

Anyway, even if running your own website, one should still be putting stuff on Wikimedia projects. And even if not using it for everything, Flickr is a good place for photos (in Australia) because you can add them to the Australia in Pictures group and they’ll turn up in searches on Trove. The Internet Archive, even if not a primary and cited place for research materials, is a great place to upload wikis’ public page dumps. So it really seems that the remaining trouble with self-hosting websites is that they’re fragile and subject to complete loss if you abandon them (i.e. stop paying the bills).

My current mitigation to my own sites’ reliance on me is to create annual dumps in multiple formats, including uploading public stuff to IA, and printing some things, and burning all to Blu-ray discs that get stored in polypropylene sleeves in the dark in places I can forget to throw them out. (Of course, I deal in tiny amounts of data, and no video.)

What was it Robert Graves said in I, Claudius about the best way to ensure the survival of a document being to just leave it sitting on one's desk and not try at all to do anything special – because it's all perfectly random anyway as to what persists, and we cannot influence the universe in any meaningful way?


v2.4.6

Published 3 Mar 2018 by fabpot in Tags from Twig.


v1.35.2

Published 3 Mar 2018 by fabpot in Tags from Twig.


Untitled

Published 2 Mar 2018 by Sam Wilson in Sam's notebook.

I think I am learning to love paperbacks. (Am hiding in New Editions this morning.)


v2.4.5

Published 2 Mar 2018 by fabpot in Tags from Twig.


v1.35.1

Published 2 Mar 2018 by fabpot in Tags from Twig.


Weather Report

Published 1 Mar 2018 by jenimcmillan in Jeni McMillan.

DSC_0437

It is Minus 11 in Berlin.

Heart rate slow.

Breath freezing.

It’s Minus 12 in Berlin.

Heart is warming.

Breath responding.

I think of the Life, Death, Rebirth cycle.

Again and again and again.

DSC_0429

Thank you Clarissa Pinkola Estés.



Conference at UWA – Home 2018

Published 26 Feb 2018 by Tom Wilson in thomas m wilson.

I’ll be presenting a paper at the following conference in July 2018.  It will be looking at the theme of aspirations for home ownership from the perspective of Big History.  Hope to see you there.

Missing

Published 20 Feb 2018 by jenimcmillan in Jeni McMillan.

Trees

Sometimes I just miss people. I want to hold them in my arms and feel their heart beat. I want to look into their souls. Share stories. Linger in all the delicious ways. This isn’t lust. There are many ways to be in the world. Lust has its place. But the kind of desire I speak of is a love so deep that it may only last a second yet find perfection. The willingness to be absolutely present. This is not a contradiction. The longing is a sweetness, something that poetry holds hands with and prose takes a long walk through aimless streets.


Episode 2: Niklas Laxström

Published 20 Feb 2018 by Yaron Koren in Between the Brackets: a MediaWiki Podcast.

Niklas Laxström is the creator and co-maintainer of translatewiki.net, the site where MediaWiki and most of its extensions (along with other software, like OpenStreetMap) gets translated into hundreds of languages. Niklas also works for the Wikimedia Foundation as part of the Language team, where he helps to develop code related to translation and internationalization, most notably the Translate extension.

Links for some of the topics discussed:


Volunteer Spotlight: David Schokker

Published 20 Feb 2018 by Rebecca Waters in DDD Perth - Medium.

View from the DDD Perth 2017 Speaker and Sponsor party (David Schokker)

Volunteers are the lifeblood of DDD Perth. In order to pull off such a conference, we need volunteers on the ground before, during and after the big day. We simply couldn’t do it without them.

This week, I spent some time chatting with one of the many volunteers of DDD Perth, David Schokker.

battlepanda (@battlepanda_au) | Twitter

David, how did you come across DDD?

I was introduced to DDD by a fellow volunteer. As I have worked on other large-scale events, I felt that I could help with DDD and share my experience.

How did you help out on the day?

I was one of the event photographers, responsible for documenting the various interesting things that happen on the day. (Ed: you can check out photos from the day, taken by David and others, over on Flickr)

DDD Perth 2017

What was the most memorable part of your volunteering experience?

The overwhelming amount of appreciation, not only for myself but for the entire volunteer team. The personal gratitude is why I do events like this.

Would you recommend volunteering at DDD? Why?

Of course, the team is wonderful and diverse.
Not only do you get to help make an event such as DDD happen, but you also get a chance to mingle with some of the best people in their fields of expertise.

Did you meet and mingle with anyone that was particularly awesome?

Yeah meeting Kris (@web_goddess) was amazing, I got to hang out with her before her preso so it was unique to meet a new friend before seeing what they excel in. It really opened my eyes to how strong of a person she is and what great things she does with the community.

Will you be volunteering in 2018?

If you guys want me, of course!

David, I promise, we want you.


Volunteer Spotlight: David Schokker was originally published in DDD Perth on Medium, where people are continuing the conversation by highlighting and responding to this story.


How to use toggleToc() in a MediaWiki installation

Published 18 Feb 2018 by lucamauri in Newest questions tagged mediawiki - Webmasters Stack Exchange.

I admin a wiki site running MediaWiki 1.29 and I have a problem collapsing the TOC on pages.

I would be interested in keeping the Contents box, but loading the page with it collapsed by default.

It appears there is a simple solution here: https://www.mediawiki.org/wiki/Manual_talk:Table_of_contents#Improved_Solution, but I have failed to implement it and I have no idea where the error is; hopefully someone can help.

I integrated the code as explained and checked that MediaWiki:Common.js is used by the site.

During page rendering, I checked that the JavaScript code is loaded and executed, but it appears to fail because

ReferenceError: toggleToc is not defined

I also checked this page https://www.mediawiki.org/wiki/ResourceLoader/Migration_guide_(users)#MediaWiki_1.29 , but in the table there is an empty cell where it should explain how to migrate toggleToc();. I am not even entirely sure it should be migrated.

Any help on this topic will be appreciated.

Thanks

Luca


new note (near -32.017, 115.781)

Published 18 Feb 2018 by Theobaldo in OpenStreetMap Notes.

Comment

Created 4 months ago by Theobaldo
"barbecue" POI has no name POI types: tourism-picnic_site OSM data version: 2018-01-26T12:04:02Z #mapsme

Full note

Created 4 months ago by Theobaldo
"barbecue" POI has no name POI types: tourism-picnic_site OSM data version: 2018-01-26T12:04:02Z #mapsme

How to use mw.site.siteName in Module:Asbox

Published 17 Feb 2018 by Rob Kam in Newest questions tagged mediawiki - Webmasters Stack Exchange.

I am exporting Template:Stub from Wikipedia for use on a non-WMF wiki. It transcludes the Scribunto Module:Asbox, which has on line 233:

' is a [[Wikipedia:stub|stub]]. You can help Wikipedia by [',

Substituting Wikipedia with the magic word {{SITENAME}} doesn't work here. How can I replace Wikipedia with the comparable Lua function mw.site.siteName, so that pages transcluding the stub template show the local wiki name instead?


Feel the love for digital archives!

Published 15 Feb 2018 by Jenny Mitcham in Digital Archiving at the University of York.

Yesterday was Valentine's Day.

I spent most of the day at work thinking about advocacy for digital preservation. I've been pretty quiet this month, beavering away at a document that I hope might help persuade senior management that digital preservation matters. That digital archives are important. That despite their many flaws and problems, we should look after them as best we can.

Yesterday I also read an inspiring blog post by William Kilbride: A foot in the door is worth two on the desk. So many helpful messages around digital preservation advocacy in here but what really stuck with me was this:

"Digital preservation is not about data loss, it’s about coming good on the digital promise. It’s not about the digital dark age, it’s about a better digital future."

Perhaps we should stop focusing on how flawed and fragile and vulnerable digital archives are, but instead celebrate all that is good about them! Let's feel the love for digital archives!

So whilst cycling home (in the rain) I started thinking about Valentine's cards that celebrate digital archives. Then with a glass of bubbly in one hand and a pen in the other I sketched out some ideas.


Let's celebrate that obsolete media that is still in good working
order (against all odds)

Even file migration can be romantic..

A card to celebrate all that is great about Broadcast
WAV format

Everybody loves a well-formed XML file

I couldn't resist creating one for all you PREMIS fans out there



I was also inspired by a Library of Congress blog post by Abbie Grotke that I keep going back to: Dear Husband: I’m So Sorry for Your Data Loss. I've used these fabulous 'data loss' cards several times over the years to help illustrate the point that we need to look after our digital stuff.



I'm happy for you to use these images if you think they might help with your own digital preservation advocacy. An acknowledgement is always appreciated!

I don't think I'll give up my day job just yet though...

Best get back to the more serious advocacy work I have to do today.






Email is your electronic memory

Published 14 Feb 2018 by Bron Gondwana in FastMail blog.

From the CEO’s desk.

Sometimes you write planned blog posts, sometimes events in the news are a prompt to re-examine your values. This is one of those second times.

Gmail and AMP

Yesterday, Google announced that Gmail will use AMP to make emails dynamic, up-to-date and actionable. At first that sounds like a great idea. Last week’s news is stale. Last week’s special offer from your favourite shop might not be on sale any more. The email is worthless to you now. Imagine if it could stay up-to-date.

TechCrunch wrote about AMP in Gmail and then one of their columnists wrote a followup response about why it might not be a good idea – which led to a lot of discussion on Hacker News.

Devin used the word static. In the past I have used the word immutable. I think “immutable” is more precise, though maybe less plain and simple language than “static” – because I don’t really care about how dynamic and interactive email becomes – usability is great, I’m all in favour.

But unchanging-ness... that’s really important. In fact, it’s the key thing about email. It is the biggest thing that email has over social networking or any of the hosted chat systems.

An email which is just a wrapper for content pulled from a website is no longer an unchangeable copy of anything.

To be totally honest, email already has a problem with mutability – an email which is just a wrapper around remotely hosted images can already be created, though FastMail offers you the option of turning them off or restricting them to senders in your address book. Most sites and email clients offer an option to block remote images by default, both for privacy and because they can change after being delivered (even more specifically, an email with remote images can totally change after being content scanned).

Your own memory

The email in your mailbox is your copy of what was said, and nobody else can change it or make it go away. The fact that the content of an email can’t be edited is one of the best things about POP3 and IMAP email standards. I admit it annoyed me when I first ran into it – why can’t you just fix up a message in place – but the immutability is the real strength of email. You can safely forget the detail of something that you read in an email, knowing that when you go back to look at it, the information will be exactly the same.

Over time your mailbox becomes an extension of your memory – a trusted repository of history, in the way that an online news site will never be. Regardless of the underlying reasons, it is a fact that websites can be “corrected” after you read them, tweets can be deleted and posts taken down.

To be clear, often things are taken down or edited for good reasons. The problem is, you can read something online, forward somebody a link to it or just go back later to re-read it, and discover that the content has changed since you were last there. If you don’t have perfect memory (I sure don’t!) then you may not even be sure exactly what changed – just be left with a feeling that it’s not quite how you remember it.

Right now, email is not like that. Email is static, immutable, unchanging. That’s really important to me, and really important to FastMail. Our values are very clear – your data belongs to you, and we promise to be good stewards of your data.

I'm not going to promise that FastMail will “never implement AMP” because compatibility is also important to our users, but we will proceed cautiously and skeptically on any changes that allow emails to mutate after you’ve seen them.

An online datastore

Of course, we’re a hosted “cloud” service. If we turned bad, we could start silently changing your email. The best defence against any cloud service doing that is keeping your own copies, or at least digests of them.

Apart from trusting us, and our multiple replicas and backups of every email, we make it very easy to keep your own copies of messages:

  1. Full standards-compliant access to email. You can use IMAP or POP3 to download messages. IMAP provides the triple of “foldername / uidvalidity / uid” as a unique key for every message. Likewise we provide CalDAV and CardDAV access to the raw copies of all your calendars and contacts.

  2. Export in useful formats. Multiple formats for contacts, and standard ICS files for calendars. It's rather hidden, but at the bottom of the Folders screen there's a link called “Mass delete or remove duplicates”, and there's a facility on that screen to download entire folders as a zip file as well.

  3. Working towards new standards for email. Our team is working hard on JMAP and will be participating in a hackathon at IETF in London in March to test interoperability with other implementations.

  4. We also provide a DIGEST.SHA1 non-standard fetch item via IMAP that allows you to fetch the SHA1 of any individual email. It’s not a standard though. We plan to offer something similar via JMAP, but for any attachment or sub-part of emails as well.
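As a sketch of how such digests could be used client-side (this is illustrative, not FastMail's implementation): record the SHA-1 of each raw message as you download it, and compare it later to confirm the message is byte-for-byte unchanged:

```python
import hashlib

def message_digest(raw_message: bytes) -> str:
    """SHA-1 hex digest of a raw message as downloaded over IMAP."""
    return hashlib.sha1(raw_message).hexdigest()

def unchanged(raw_message: bytes, recorded_digest: str) -> bool:
    """True if the message still matches a previously recorded digest."""
    return message_digest(raw_message) == recorded_digest
```

Storing these digests alongside your own archive gives you an independent check that no copy, hosted or local, has silently mutated.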

Your data, your choice

We strongly believe that our customers stay with us because we’re the best, not because it’s hard to leave. If for any reason you want to leave FastMail, we make it as easy as possible to migrate your email away. Because it’s all about trust – trust that we will keep your email confidential, trust that we will make your email easy to access, and trust that every email will be exactly the same, every time you come back to read it.

Thank you to our customers for choosing us, and staying with us. If you’re not our customer yet, please do grab yourself a free trial account and check out our product. Let us know via support or Twitter whether you decide to stay, and particularly if you decide not to! The only thing we don’t want to hear is “it should be free” – we’re not interested in that discussion; we provide a good service and we proudly charge for it, so that you are our customer, not our product.

And if you’re not ready to move all your email, you can get a lot of the same features for a whole group of people using Topicbox – a shared memory without having to change anything except the “To:” line in the emails you send!

Cheers,

Bron.


MySQL socket disappears

Published 9 Feb 2018 by A Child of God in Newest questions tagged mediawiki - Server Fault.

I am running Ubuntu 16.04 LTS, with MySQL server for MediaWiki 1.30.0 along with Apache2 and PHP7.0. The installation was successful for everything; I managed to get it all running. Then I start installing extensions for MediaWiki. Everything is fine until I install the VisualEditor extension. It requires that I have both Parsoid and RESTBase installed, so I install those alongside VisualEditor. Then I go to check my wiki and see this message (the database name for the wiki is "bible"):

Sorry! This site is experiencing technical difficulties.

Try waiting a few minutes and reloading.

(Cannot access the database: Unknown database 'bible' (localhost))

Backtrace:

#0 /var/www/html/w/includes/libs/rdbms/loadbalancer/LoadBalancer.php(1028): Wikimedia\Rdbms\Database->reportConnectionError('Unknown databas...')

#1 /var/www/html/w/includes/libs/rdbms/loadbalancer/LoadBalancer.php(670): Wikimedia\Rdbms\LoadBalancer->reportConnectionError()

#2 /var/www/html/w/includes/GlobalFunctions.php(2858): Wikimedia\Rdbms\LoadBalancer->getConnection(0, Array, false)

#3 /var/www/html/w/includes/user/User.php(493): wfGetDB(-1)

#4 /var/www/html/w/includes/libs/objectcache/WANObjectCache.php(892): User->{closure}(false, 3600, Array, NULL)

#5 /var/www/html/w/includes/libs/objectcache/WANObjectCache.php(1012): WANObjectCache->{closure}(false, 3600, Array, NULL)

#6 /var/www/html/w/includes/libs/objectcache/WANObjectCache.php(897): WANObjectCache->doGetWithSetCallback('global:user:id:...', 3600, Object(Closure), Array, NULL)

#7 /var/www/html/w/includes/user/User.php(520): WANObjectCache->getWithSetCallback('global:user:id:...', 3600, Object(Closure), Array)

#8 /var/www/html/w/includes/user/User.php(441): User->loadFromCache()

#9 /var/www/html/w/includes/user/User.php(405): User->loadFromId(0)

#10 /var/www/html/w/includes/session/UserInfo.php(88): User->load()

#11 /var/www/html/w/includes/session/CookieSessionProvider.php(119): MediaWiki\Session\UserInfo::newFromId('1')

#12 /var/www/html/w/includes/session/SessionManager.php(487): MediaWiki\Session\CookieSessionProvider->provideSessionInfo(Object(WebRequest))

#13 /var/www/html/w/includes/session/SessionManager.php(190): MediaWiki\Session\SessionManager->getSessionInfoForRequest(Object(WebRequest))

#14 /var/www/html/w/includes/WebRequest.php(735): MediaWiki\Session\SessionManager->getSessionForRequest(Object(WebRequest))

#15 /var/www/html/w/includes/session/SessionManager.php(129): WebRequest->getSession()

#16 /var/www/html/w/includes/Setup.php(762): MediaWiki\Session\SessionManager::getGlobalSession()

#17 /var/www/html/w/includes/WebStart.php(114): require_once('/var/www/html/w...')

#18 /var/www/html/w/index.php(40): require('/var/www/html/w...')

#19 {main}

I checked the error logs in MySQL, and the error message said that the database was being accessed without a password. I restarted my computer and restarted Apache, Parsoid, RESTBase, and MySQL, but I could not successfully restart MySQL. I checked the error log by typing the command journalctl -xe and saw that it failed to start because it couldn't write to /var/lib/mysql/. I went to Stack Exchange to see if I could find a solution, and one answer said to use the command mysql -u root -p. I did, and typed in the password, and it gave this error:

ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)

I also check the status of it by typing sudo mysqladmin status which said:

mysqladmin: connect to server at 'localhost' failed error: 'Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)' Check that mysqld is running and that the socket: '/var/run/mysqld/mysqld.sock' exists!

I wanted to verify that it existed, but upon browsing to the location of the socket, I found it was not there. I saw an answer about a missing MySQL socket which said to use the touch command to create the socket and another file. I did, and still had the same issue. I went back to the directory and found the two files to be missing, so I created them again with the touch command and watched the folder to see what happened. After about half a minute, the folder seems to be deleted and recreated: I get kicked out of the folder into its parent directory, and when I go back in, the files are gone.

Does anybody know why this is happening, or at least how I can fix this and get MySQL back up and running?


Global Diversity Call for Proposals Day

Published 7 Feb 2018 by Rebecca Waters in DDD Perth - Medium.

Photo by Hack Capital on Unsplash

February 3rd, 2018: Global Diversity Call for Proposals (CFP) Day. Around the globe, over 50 cities across 23 countries participated by running CFP workshops.

The workshops were aimed at first time would-be speakers, from any field (technology focus not required). Mentors were available to help with proposals, provide speaking advice and share their enthusiasm to get newcomers up on stage.

Workshops were held in Brisbane, Melbourne, Perth and Sydney in Australia, by some of the most vocal supporters of diversity in technology in the country.

In Perth, Fenders and DDD Perth run proposal-writing workshops to help reduce the barrier to submitting, and so it made sense for us to join in this February fun and encourage a whole new group of potential conference speakers to get up and share their knowledge!

The workshop in Perth was well attended with participants from different backgrounds, both personally and professionally, coming together to work on their proposals. Mentors from Fenders and DDD Perth brought their children down and the entire building at Meerkats was filled with excitement (and snacks!).