Categories
Blog

Synchronise remote WordPress database to local Vagrant

When working on a local copy of an existing WordPress site it’s useful to synchronise the latest content from the live server.

One way is to download the database as an SQL file, convert it for the local URL then import it into the local server – Vagrant in this instance.

After having done this process manually for a particular long-running client a couple of times I’ve finally done the obvious thing & written a bash script to automate it.

Simplifying ssh

I already have an alias to the remote server set up in ~/.ssh/config.

# Add an alias "example" for example.com
Host example
    HostName example.com
    User myUserName
    Port 7777
    IdentityFile ~/.ssh/id_rsa.example.com

This allows me to connect to the server with ssh example which makes the script a bit less verbose.

The steps

  1. Export the remote database
  2. Download the SQL file
  3. Find & replace the remote URL with the local one
  4. Replace the old local database with the new one

1: Export the remote database

I made a server-side script (dump-db.sh), using mysqldump to save a snapshot of the database as ~/db-dump.sql:

mysqldump -u db_username -pPASSWORD wp_databasename > ~/db-dump.sql

Running this remotely via ssh means the local script doesn’t need to know the database password:

ssh example '~/dump-db.sh'

2: Download the SQL file

scp is a quick way to transfer files over ssh:

scp example:~/db-dump.sql ~/db-dump.sql

3: Find & replace the remote URL with the local one

In order for the remote example.com database to work on my local example.test server I have to find & replace all instances of example.com with example.test. I use sed for this, with the -i (in-place) option to overwrite the existing file with the updated version, escaping the dot so it isn’t treated as a regex wildcard. (One caveat: a plain text replacement can corrupt serialized PHP data in a WordPress database when the two URLs differ in length; a serialization-aware tool such as WP-CLI’s search-replace command avoids this.)

sed -i 's/example\.com/example.test/g' ~/db-dump.sql

4: Replace the old local database with the new one

My Vagrant box provides an import script which imports any SQL file in the /backups directory into a database with the same name as the file. First I move the updated dump file into place, renaming it after the database I want it imported into:

mv ~/db-dump.sql ~/vagrant-local/database/sql/backups/wp_db.sql

The script won’t overwrite existing databases so next I need to drop the old local database. This can be done with vagrant ssh, passing a command with the -c (command) option.

The drop command itself uses mysql’s -e (execute) option to run the SQL command DROP DATABASE wp_db. The command runs inside Vagrant’s virtual machine so it doesn’t require a username & password:

pushd ~/vagrant-local/
vagrant ssh -c 'mysql -e "DROP DATABASE wp_db"'

Note: To run vagrant ssh successfully in the script I had to first switch to the /vagrant-local directory. I used pushd for this so that I can popd back to the previous location after, but pushd initially failed with the error message pushd: not found. I fixed this by adding the #!/bin/bash header to the top of the script; without it the script was run by plain sh, which doesn’t provide pushd.

With the old database dropped, Vagrant’s import-sql.sh script will import the SQL dump file into a database named wp_db. The script is run using vagrant ssh with -c again:

vagrant ssh -c '/srv/database/import-sql.sh'
popd

That’s it!

The full script

I like to echo reassuring messages in my scripts so that future me knows what’s actually going on!

#!/bin/bash

echo
echo ------------------------------
echo Dump database on remote server
echo ------------------------------
echo

# Run database dump script on remote server
ssh example '~/dump-db.sh'

echo Done ✔

echo
echo ------------------------------
echo Download remote dump file
echo ------------------------------
echo

# Download database
scp example:~/db-dump.sql ~/db-dump.sql

echo Done ✔

echo
echo ------------------------------
echo Localise downloaded dump file
echo ------------------------------
echo

# Find & replace `example.com` with `example.test`
sed -i 's/example\.com/example.test/g' ~/db-dump.sql

# Move & rename the updated sql file
mv ~/db-dump.sql ~/vagrant-local/database/sql/backups/wp_db.sql

echo Done ✔

echo
echo ------------------------------
echo Run vagrant import script
echo ------------------------------
echo 

pushd ~/vagrant-local/

# Drop old vagrant database
vagrant ssh -c 'mysql -e "DROP DATABASE wp_db"'

# Run import script
vagrant ssh -c '/srv/database/import-sql.sh'

popd

echo Done ✔

echo
echo ------------------------------
echo Synchronisation complete!
echo ------------------------------

Post-script

After finishing writing this post I ran a final test of my script.

It appeared to work perfectly, but when I tried to view the local site on Vagrant all I saw was the dreaded White Screen of Death!

I obviously attributed this to the script I had just run, but after a day of anxious troubleshooting it turned out to be a badly timed coincidence caused by something else I had changed… *sigh*

Categories
Blog

JavaScript Concurrency

I recently completed the excellent JavaScript Concurrency course on executeprogram.com. Here’s a quick summary of Promises and async/await in JavaScript.

What are Promises?

Promises are a feature of JavaScript that allows our code to cope with things that might not respond immediately. A classic example of this is waiting for a response from a server, where we don’t know when the data will be returned, or if it will be returned at all.
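
For example (a quick sketch; the URL is made up), calling fetch() hands us a Promise immediately, even though the response arrives later, if at all:

// fetch() returns a pending Promise straight away
const responsePromise = fetch('https://example.com/data.json');
console.log( responsePromise );

// Result:
// Promise { <state>: "pending" }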

Handling Promises

Promises act as a wrapper around another value, which could be any valid JavaScript value. To access the value contained in a Promise we can use its then() method.

then accepts a function as its single argument. For brevity this is usually passed as an arrow function, for example promise.then(v => console.log(v))

When the Promise fulfills, the value it contains is passed into the arrow function. Note that then doesn’t change the original Promise; it returns a new Promise containing the result of the function. In the following example we use then to multiply a Promise’s value by 2:

const p = Promise.resolve(4);
p.then( v => v * 2 ).then( v => console.log(v) );// 8
p.then( v => console.log(v) );// 4 (the original promise is unchanged)

Note that a then call always returns a promise, which means calls can be chained.

Catching errors

If an error is thrown inside a promise callback, the resulting promise will reject. Its state will change to rejected and the error will propagate down the chain; with no handler it surfaces as an uncaught rejection.

Promise.resolve()
  .then(() => {
    throw new Error('oops');
  });

// Result:
// Uncaught (in promise) Error: oops
// { <state>: rejected, <reason>: Error }

Promises have a method catch which can be used to intervene, allowing the promise to fulfill with a specified value rather than rejecting.

Promise.resolve()
  .then(() => {
    throw new Error('Reject!');
  })
  .catch(() => 'An error was caught');

// Result:
// { <state>: "fulfilled", <value>: 'An error was caught' }

new Promise & resolve

The new Promise() constructor accepts a function as a parameter, which is executed immediately. That function itself receives one or two parameters: the new Promise’s internal resolve & reject functions.

const p = new Promise( 
   resolve => resolve(4)
);

p.then( v => console.log( v ) );

// Result: 4

const p = new Promise( 
   (resolve,reject) => reject('oops')
);

// Result: 
// Uncaught (in promise) oops

We can hold on to resolve and call it later as a way to play with Promises, for example using setTimeout:

let resolver;
const p = new Promise( resolve => resolver = resolve )
p.then( v => console.log( v ));
setTimeout( () => resolver('hey'), 1000 );
  
// Result after 1 second:
// 'hey'

Or, wrapped up in a reusable function:

function inOneSecond( func ){
   new Promise( resolve => {
     setTimeout( () => resolve( func() ), 1000 );
   });
}

inOneSecond( () => console.log('hey') );

// Result after 1 second: 
// 'hey'

What is async/await?

async/await is a pair of JavaScript keywords to simplify working with Promises. The async keyword is used before the function keyword when defining a function. Functions defined in this way will always return a Promise.
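
The wrapping is easy to see in isolation (a minimal sketch):

async function four() {
   return 4;
}

four().then( v => console.log( v ) );

// Result: 4
// four() returned Promise { 4 }, not the plain number 4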

await is used inside an async function, prepended to an expression that produces a Promise. It pauses the function until that Promise settles, then unwraps the fulfilled value.

function inOneSecond( func ) {
   return new Promise( resolve => {
     setTimeout( () => resolve( func() ), 1000 );
   });
}

async function chain() {
   await inOneSecond(()=>console.log('1 second'));
   await inOneSecond(()=>console.log('2 seconds'));
}

chain();

// Result
// After 1 second: '1 second'
// After 2 seconds: '2 seconds'

Note: This example updates the inOneSecond function to return the promise it creates; without the return, await would have nothing to wait for.

async/await with arrow functions

The same example using arrow functions:

const inOneSecond = ( func ) => new Promise( 
   resolve => {
      setTimeout( () => resolve( func() ), 1000 );
   }
);

const chain = async () => {
   await inOneSecond(()=>console.log('1 second'));
   await inOneSecond(()=>console.log('2 seconds'));
}

chain();

// Result
// After 1 second: '1 second'
// After 2 seconds: '2 seconds'

Other methods of Promise

Promise.all()

  • Takes an array of promises
  • Collects their fulfilled values into an array
  • Returns a promise fulfilled with that array
  • Immediately rejects if any of the promises reject

Promise.all([
   Promise.resolve('Resolved promise 1'),
   Promise.resolve('Resolved promise 2'),
   Promise.resolve('Resolved promise 3')
]).then(
   v => console.log( v )
);

// Result: Array(3)
// 0: "Resolved promise 1"
// 1: "Resolved promise 2"
// 2: "Resolved promise 3"
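
And a sketch of the rejection behaviour, where a single rejection short-circuits the whole array:

Promise.all([
   Promise.resolve('Resolved promise 1'),
   Promise.reject('Rejected promise 1'),
   Promise.resolve('Resolved promise 2')
]).then(
   v => console.log( v )
).catch(
   reason => console.log( 'Caught:', reason )
);

// Result: "Caught: Rejected promise 1"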

Promise.allSettled()

  • Takes an array of promises
  • Collects their fulfilled or rejected values
  • Returns a promise fulfilled with an array of objects
  • Each object in the array has a status key, plus a value or reason key:
    • For fulfilled promises, the status is fulfilled and the result is in value
    • For rejected promises, the status is rejected and the error is in reason

Promise.allSettled([
   Promise.resolve('Resolved promise 1'),
   Promise.reject('Rejected promise 1'),
   Promise.resolve('Resolved promise 2'),
   Promise.reject('Rejected promise 2')
]).then(
   v => console.log( v )
);

// Result: Array(4)
// 0: Object { status: "fulfilled", value: "Resolved promise 1" }
// 1: Object { status: "rejected", reason: "Rejected promise 1" }
// 2: Object { status: "fulfilled", value: "Resolved promise 2" }
// 3: Object { status: "rejected", reason: "Rejected promise 2" }
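
A common follow-up (my own sketch, not from the course) is to filter the settled results down to just the fulfilled values:

Promise.allSettled([
   Promise.resolve('Resolved promise 1'),
   Promise.reject('Rejected promise 1'),
   Promise.resolve('Resolved promise 2')
]).then( results =>
   results
      .filter( r => r.status === 'fulfilled' )
      .map( r => r.value )
).then(
   v => console.log( v )
);

// Result: Array [ "Resolved promise 1", "Resolved promise 2" ]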

An example using the Fetch API

fetch( path )
   .then(function (response) {

      // The API call was successful!
      if (response.ok) { return response.json() }

      // There was an error
      return Promise.reject(response);

   })

   .then(function (data) {

      if ( !data.avatar ) return;

      // Add logged-in class to body
      document.body.classList.add('logged-in');

   })

   .catch(function (err) {
      // There was an error
      console.warn('Something went wrong', err);
   });
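
For comparison, here’s the same flow rewritten with async/await (a sketch assuming the same path variable and response shape as above; the function name is mine):

async function checkLoggedIn( path ) {
   try {
      const response = await fetch( path );

      // There was an error
      if ( !response.ok ) throw response;

      const data = await response.json();

      if ( !data.avatar ) return;

      // Add logged-in class to body
      document.body.classList.add('logged-in');

   } catch (err) {
      // There was an error
      console.warn('Something went wrong', err);
   }
}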
Categories
Blog

`zsh_stats`

I’ve been using Zsh + Oh My Zsh for all my command line needs for some time, so I was delighted to discover Oh My Zsh’s zsh_stats command.

It gives a nice summary of the top 20 most frequently used shell commands. Here’s my top 20:

 1    2235  20.3998%   g
 2    1388  12.6689%   l
 3    787   7.18328%   npm
 4    747   6.81818%   cd
 5    391   3.56882%   vi
 6    305   2.78386%   t
 7    294   2.68346%   cat
 8    257   2.34575%   sudo
 9    247   2.25447%   which
10    214   1.95327%   git
11    152   1.38737%   ssh
12    143   1.30522%   vagrant
13    137   1.25046%   rm
14    127   1.15918%   man
15    121   1.10442%   node
16    104   0.949252%  todo
17    102   0.930997%  mv
18    93    0.84885%   grep
19    93    0.84885%   docker
20    92    0.839723%  pwd

Notes:

  • g : my alias for git
  • l : my alias for ls -al
  • t : my alias for todo.sh
  • todo : my alias for t due 7
  • cd is (mostly) redundant as Oh My Zsh doesn’t require it
  • git is also redundant as I should be using g
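
For anyone unfamiliar with aliases, these live in ~/.zshrc and look something like this (a reconstruction from the notes above, not a verbatim copy of my config):

# ~/.zshrc
alias g='git'
alias l='ls -al'
alias t='todo.sh'
alias todo='t due 7'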

What are your top 20 zsh commands? Or perhaps you have already stopped reading?!

Update 03/07/21

     1	3093  26.884%    g
     2	1119  9.72621%   l
     3	648   5.63233%   npm
     4	342   2.97262%   cd
     5	312   2.71186%   git
     6	308   2.6771%    cat
     7	302   2.62495%   vi
     8	243   2.11213%   fff
     9	240   2.08605%   sudo
    10	233   2.02521%   which
    11	226   1.96436%   t
    12	206   1.79053%   rm
    13	155   1.34724%   node
    14	147   1.27771%   mv
    15	126   1.09518%   man
    16	112   0.97349%   grep
    17	109   0.947414%  vagrant
    18	102   0.886571%  docker
    19	92    0.799652%  cp
    20	80    0.69535%   -

Update 27/01/22

     1	2630  26.112%    g
     2	736   7.30739%   l
     3	488   4.84512%   git
     4	413   4.10048%   vi
     5	354   3.51469%   npm
     6	324   3.21684%   cat
     7	286   2.83956%   cd
     8	277   2.7502%    sudo
     9	234   2.32327%   which
    10	212   2.10485%   rm
    11	204   2.02542%   t
    12	166   1.64813%   grep
    13	159   1.57863%   mv
    14	152   1.50913%   man
    15	113   1.12192%   node
    16	109   1.08221%   vagrant
    17	90    0.893566%  ln
    18	89    0.883638%  cp
    19	81    0.80421%   echo
    20	78    0.774424%  spd-say

Notes:

  • I’m still accidentally using cd but less often
  • New entry spd-say is a handy one for quick text-to-speech

Categories
Blog

Agreeable smiley

After a friend described yesterday’s effort as “…terrifying… like being at a rave with a disintegrating Edvard Munch” I decided to make a more agreeable version.

See the Pen Zdog agreeable smiley by Dan Farrow (@squarebracket) on CodePen.

Categories
Blog

Zdog animation fun

I’ve been having fun trying out David DeSandro’s ace JavaScript pseudo-3D animation library Zdog.

<canvas style="display: block; margin: 40px auto;" class="zdog-canvas" width="480" height="480"></canvas>
 
// create illo
const illo = new Zdog.Illustration({
  // set canvas with selector
  element: '.zdog-canvas',
  scale: 2.5,
});

const faceGroup = new Zdog.Group({
  addTo: illo
});

// face
const face = new Zdog.Hemisphere({
  addTo: faceGroup,
  diameter: 180,
  color: '#ee3',
  fill: true,
});

const face2 = new Zdog.Hemisphere({
  addTo: faceGroup,
  diameter: 180,
  scale: { z: -1 },
  color: '#ff5',
  fill: true,
});

// left eye
const eyeLeft = new Zdog.Ellipse({
  addTo: faceGroup,
  diameter: 30,
  translate: { x: 40, y: -10, z: 50 },
  rotate: { y: -0.5 },
  scale: { x: 0.65 },
  color: '#333',
  fill: true,
});

// right eye
const eyeRight = eyeLeft.copy({
  translate: { x: -40, y: -10, z: 50 },
  rotate: { y: 0.5 },
});

// mouth
const mouth = eyeLeft.copy({
  diameter: 30,
  translate: { y: 40, z: 40 },
  rotate: { x: -0.5 },
  scale: { x: 3 },
  color: '#333',
});

let i = 0.1,
    y = 0.005;

function animate() {
  // rotate illo each frame
  i += 0.05;
  illo.rotate.y = Math.sin(i);
  illo.rotate.z = Math.cos(i) * 0.5;
  illo.rotate.x = Math.cos(y += 0.005) * 1.2;
  illo.updateRenderGraph();
  mouth.scale.y = Math.cos(i);
  // animate next frame
  requestAnimationFrame( animate );
}

// start animation
animate();

I was thinking of those transparent bouncy balls with things inside them.

Also I used CodePen’s prefill embed feature to embed the pen here and it worked a treat!

Categories
Blog

Fun fact

CERN’s 30th anniversary of the World Wide Web was held on my 48th birthday. The web has been around for my entire adult life, to the day!

Categories
Blog

How to apply a negative timing offset to Zoom VTT captions

This seems like it must be a fairly common captioning requirement:

My sister works for a charity and ran an online seminar (I refuse to call it a webinar!) using Zoom. She had made captions for the 90-minute event by taking machine-generated captions and painstakingly correcting them. She planned to put the video & captions up on the charity’s YouTube channel.

The problem

The first 40 minutes of the video were not useful and she wanted to trim them. This is easy in YouTube Studio but she found that the captions she had uploaded alongside the video didn’t get trimmed, so all the timings were out of sync.

It seems like a real oversight that YouTube Studio doesn’t do this automatically, but it’s in beta so maybe that feature is coming soon. For now I had to help my sister fix a problem she had already spent several hours on without progress.

She sent me the VTT caption file which is a text file that looks like this:

1
00:08:53.340 --> 00:08:54.420
Hi, can you hear me.

2
00:09:17.550 --> 00:09:18.810
Yes, I can hear you.

3
etc...

The solution

I searched DuckDuckGo and found closedcaptionconverter.com, a free online tool for working with captions. It can apply timing offsets so it seemed like the perfect solution… until I tried to apply a negative offset which it didn’t seem to support.

My workaround was to apply a positive offset of 60 minutes minus our required negative offset, and then fix the hours using find & replace.

The first caption after the video was edited needed to start at about 00:00:10.000, but in the caption file it started at 00:35:55.140, so I offset the captions by 00:24:14.860, taking that first cue to 01:00:10.000, exactly one hour ahead of its target.

Once the captions were offset I fixed the hours with a regular-expression find & replace in Sublime Text. I searched for ^01:, using the regex symbol ^ to match only at the start of a line, and replaced it with 00:; then I replaced ^02: with 01:.

After that I also had to replace --> 01 with --> 00, and --> 02 with --> 01, to fix the end timestamps. I sent it back to my sister and… it worked!
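
The same hour corrections could be scripted with sed (a sketch assuming GNU sed; captions.vtt stands in for the real file name):

# Order matters: shift 01 down to 00 before shifting 02 down to 01,
# so no line gets shifted twice. Fix the start-of-line timestamps
# first, then the end timestamps after "--> ".
sed -i -e 's/^01:/00:/' \
       -e 's/^02:/01:/' \
       -e 's/--> 01:/--> 00:/' \
       -e 's/--> 02:/--> 01:/' captions.vtt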

End result

One very happy & relieved sister!

Categories
Blog

htaccess debugging workflow

Writing complex mod_rewrite redirection rules with .htaccess can lead to a downwards spiral of frustration, paranoia and utter bewilderment.

Today I’m working on a particularly tricky scenario so I decided to get more methodical about it.

Here’s a summary of the workflow I’ve been using. I have to remind myself to be deliberately slow and methodical – if I get impatient and rush ahead I’m much more likely to miss something unexpected and end up going in circles.

My setup uses a locally installed Apache server and the Firefox Selenium IDE plugin:

  • Make a new directory in the server root, to keep everything in one place, with an index.php echoing some debugging info:
    File: <?= __FILE__ ?>
    Script name: <?= $_SERVER[ 'SCRIPT_NAME' ] ?>
    Request URI: <?= $_SERVER[ 'REQUEST_URI' ] ?>
    Query string: <?= $_SERVER[ 'QUERY_STRING' ] ?>
    Cookie check: <?= $_COOKIE[ 'myCookie' ] ?>
  • Break the task up into a list of manageable blocks e.g.
    1. Set a cookie
    2. Append flag to URL if cookie is present
    3. If flag present remove it and set query string
  • Make directories named after each block e.g.
    • /set-a-cookie
    • /append-flag-to-url-if-cookie-is-present
    • /if-flag-present-remove-it-and-set-query-string
  • Each directory contains an .htaccess file and index.php
  • Put a comment at the top of each .htaccess identifying the block it addresses e.g. # set-a-cookie. That way when you start copying & pasting you can keep track of what each block does
  • The index file simply pulls in the root index.php:
    <?php require( '../index.php' );
  • Make a new Selenium test suite and begin constructing a test case for the first block. These tests are generally pretty simple: open a URL and check that the location is redirected as expected.
  • Remember to test for multiple scenarios such as requests for:
    • with trailing slashes
    • without trailing slashes
    • with query strings
    • index.php
    • static files
  • Start working on the .htaccess file, running the tests each time you make a change. Once a block is working move to the next one
  • Repeat until done
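
As an illustration of the first block in the list above, /set-a-cookie/.htaccess might contain something like this (a sketch; the cookie name and domain are placeholders):

# set-a-cookie
RewriteEngine On

# Set myCookie=1 for any request to this directory
RewriteRule ^ - [CO=myCookie:1:example.test,L]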

Work slowly and try not to make too many changes at once. mod_rewrite may seem like voodoo but don’t let it give you the heebie-jeebies!

Categories
Blog

How to avoid accidentally posting test content on live sites

It’s quite unprofessional, not to mention embarrassing, to post test content to what you think is a development site, only to discover you’re in the wrong tab and have just posted to the live site.

To avoid this I use the Stylish Add-on for Firefox to inject custom CSS into live sites I’m working on.

For example, I have the following rule for touretteshero.com:

@-moz-document domain("www.touretteshero.com") {
   html {
      border-left: 30px solid red !important;
   }
}

The result is that, in my browser, the live site has a hard-to-ignore red border that acts as a clear visual reminder:

[Screenshot: the touretteshero.com site with a hard-to-ignore red left border]
Caution: Live site!

Update

Since writing this I’m working with four distinct versions of the site: local development, remote development, remote live and remote legacy (backup of the previous version).

I’ve refined my Stylish rules accordingly to add a custom footer for each. The new rule for the live site is:

@namespace url(http://www.w3.org/1999/xhtml);

@-moz-document domain("www.touretteshero.com") {
  body::after {
    z-index: 99999;
    line-height: 40px;
    text-align: center;
    width: 100%;
    font-size: 16px;
    color: #fff;
    background: #800;
    content: "live";
    left: 0;
    position: fixed;
    bottom: 0;
  }
}

which looks like this:

[Screenshot: the live site with a red "live" footer]

Whereas the local development version looks like this:

[Screenshot: the local development site with a green footer]

Red = Caution, Green = Go!

Categories
Blog

Firefox Metabookmarks 3

More progress this week: Metabookmarks is now on GitHub!

I’ve sidelined the custom protocol for now and I’m planning to use a chrome:// content link instead, specified in the chrome.manifest file. This allows arbitrary content files to be bundled inside the plugin file and accessed via chrome:// URLs, and hopefully will get me well on the way.
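
The chrome.manifest registration is a single content line (a sketch assuming metabookmarks as the package name, which I haven’t finalised):

content metabookmarks content/

With that in place, files in the content directory become reachable at chrome://metabookmarks/content/ URLs.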

Using cfx means the content directory and the chrome.manifest file have to be manually added to the plugin.xpi archive. This gets tedious quickly so I wrote a build script to automate the process.

The build script also uses the excellent Extension Auto-Installer add-on to ‘push’ the update to Firefox immediately.
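
The build script ends up being roughly this shape (a sketch; the exact file names are illustrative):

#!/bin/bash

# Build the add-on, then add the files cfx leaves out
cfx xpi
zip plugin.xpi chrome.manifest
zip -r plugin.xpi content/

# Push the updated add-on to a running Firefox via the
# Extension Auto-Installer add-on, which listens on port 8888
wget --post-file=plugin.xpi http://localhost:8888/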

So, my pencils are nice & sharp, the desk is tidy and I’m ready to start some serious hacking!