AWS RDS MySQL migration with replication

Amazon Web Services makes it quite easy to replicate MySQL instances running on their RDS service - but sometimes, you’ll still have to do things manually.

Migrating an unencrypted RDS instance into one that uses encryption (or vice versa) is such a case, because you (currently) cannot use an unencrypted snapshot to create a new encrypted instance and you can’t create an encrypted read replica of an unencrypted database.

Migrating an instance into the Frankfurt (eu-central-1) region is another example where RDS currently won’t help you: you can neither copy snapshots into the Frankfurt region nor create a cross-region read replica there (update: as of 2015-09-21, cross-region read replicas are supported in eu-central-1).

This post explains how to migrate using MySQL replication.

Disclaimer: This post is still a work in progress; I published it regardless to force myself to actually finish and polish it. So as long as you see this warning here, you should take everything with a grain of salt… well, you should do that anyway with everything you hear or read. ;-)


Overview

We want to migrate from a source RDS instance to our target instance. You should definitely practice this process if you are trying to migrate a production database without downtime.

To do this, we’ll have to go through the following steps:

  1. Create a read replica of the source instance (in the same region as the source instance).
  2. Use the read replica to extract a full dump of the source database (without putting heavy load on the source instance).
  3. Spin up a new RDS instance as the target instance.
  4. Import the dump from the source database.
  5. Set up manual replication between the new target instance and the source instance.
  6. Wait for replication to catch up.

Once replication has finally caught up (and you’ve reconfigured your application to use the new instance), we’ll do some cleanup and get rid of the read replica we created in step 1, as well as the source instance.

1. Create a read replica of your source database

You can do that via the aws cli or the web console.
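With the aws cli, creating the replica looks roughly like this (a minimal sketch; the instance identifiers and region are placeholders you’ll have to adjust to your setup):

aws rds create-db-instance-read-replica \
    --db-instance-identifier read-replica-of-source-db \
    --source-db-instance-identifier source-db \
    --region eu-west-1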

2. Extract a database dump from the read replica

First, you’ll have to stop replication on the read replica by running the following SQL statement:

CALL mysql.rds_stop_replication;

Now we’ll have to figure out the binary log position from which replication will later have to resume once we’ve imported the dump. Run the following SQL statement:

SHOW SLAVE STATUS;

Write down the following values; you will need them later (example output below):

  • Master_Log_File
  • Read_Master_Log_Pos
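The relevant lines of the SHOW SLAVE STATUS output will look something like this (the values here are purely illustrative, yours will differ):

Master_Log_File: mysql-bin-changelog.012345
Read_Master_Log_Pos: 120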

Next, prepare the database dump:

echo "SET autocommit=0;
SET unique_checks=0;
SET foreign_key_checks=0;" > source_dump.sql

Then create the actual database dump with mysqldump (it’s highly recommended to run your dump in a screen session; depending on your data, you may have to set --max_allowed_packet accordingly):

mysqldump -h read-replica-of-source-db.region.rds.amazonaws.com -u username -ppassword -P 3306 --quick --databases db1 db2 db3 --routines --triggers >> source_dump.sql; echo "COMMIT;" >> source_dump.sql

Creating the dump will take quite a while. Good night and see you tomorrow. ;-)

3. Start a new RDS instance as your target instance

You can do that via the aws cli or the web console.
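With the aws cli, this looks roughly like the following sketch (all values are placeholders; --storage-encrypted is what you’d add when migrating to an encrypted instance):

aws rds create-db-instance \
    --db-instance-identifier target-db \
    --db-instance-class db.r3.large \
    --engine mysql \
    --allocated-storage 100 \
    --master-username username \
    --master-user-password password \
    --storage-encrypted \
    --region eu-central-1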

4. Import your dump into the new target instance

To speed up the import a bit, you should set innodb_flush_log_at_trx_commit = 2 for the target instance.
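On RDS you can’t set this variable directly; it lives in the DB parameter group attached to the target instance. A minimal sketch with the aws cli (the parameter group name is a placeholder):

aws rds modify-db-parameter-group \
    --db-parameter-group-name target-db-params \
    --parameters "ParameterName=innodb_flush_log_at_trx_commit,ParameterValue=2,ApplyMethod=immediate"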

Once that is done, it is time for importing the dump into the new instance (it’s highly recommended to run your import in a screen session; depending on your data, you may have to set --max_allowed_packet accordingly):

mysql -h target-db.region.rds.amazonaws.com -u username -ppassword < source_dump.sql

This will again take quite a while.

5. Set up manual replication between the new target instance and the source instance.

To set up replication, you’ll first have to create a replication user on the source instance:

GRANT REPLICATION SLAVE ON *.* TO 'username'@'%' IDENTIFIED BY 'password';

Then head over to the target instance and configure replication (you’ll now need the values for Master_Log_File and Read_Master_Log_Pos that you wrote down earlier):

CALL mysql.rds_set_external_master (
    'source-db.region.rds.amazonaws.com',
    3306,
    'username',
    'password',
    'Master_Log_File',
    Read_Master_Log_Pos,
    1
);

Now, we’ll start replication by running the following SQL statement on the target instance:

CALL mysql.rds_start_replication;

You can check if replication is actually running with SHOW SLAVE STATUS; on the target instance (Slave_IO_Running and Slave_SQL_Running should both be Yes).

6. Wait for replication to catch up.

Once replication has caught up, you should set innodb_flush_log_at_trx_commit back to 1 again.

You’ll probably also want to migrate your db users from the source instance. You can get a listing of them on the source instance with:

SELECT user, host FROM mysql.user;

Next, run the following statement on the source instance for every user you want to migrate:

SHOW GRANTS FOR 'username'@'%';

Then run the output of those queries on the target instance to create the users there, and run FLUSH PRIVILEGES; afterwards. If you have many users, the shell sketch below automates this.
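This is a minimal sketch (credentials and hostnames are placeholders); it generates the SHOW GRANTS statements, executes them, and collects the results into grants.sql. Review that file before applying it, since mysql.user on RDS also contains system users (e.g. rdsadmin) that you must not copy:

mysql -h source-db.region.rds.amazonaws.com -u username -ppassword -N \
    -e "SELECT CONCAT('SHOW GRANTS FOR ''', user, '''@''', host, ''';') FROM mysql.user" \
    | mysql -h source-db.region.rds.amazonaws.com -u username -ppassword -N \
    | sed 's/$/;/' > grants.sql
mysql -h target-db.region.rds.amazonaws.com -u username -ppassword < grants.sql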

Final steps and clean up

  • Now you should set your application into read-only mode or take it down for a little while (e.g. if you use auto_increment, you don’t want a write to the new target instance to collide with a write that is still being replicated from the source instance under the same record id).
  • Configure your application to use the new target instance.
  • Make sure that replication has fully caught up with SHOW SLAVE STATUS; on the target instance.
  • Stop replication by calling CALL mysql.rds_stop_replication; on the target instance.
  • Remove the replication configuration by calling CALL mysql.rds_reset_external_master; on the target instance.

After you’ve verified that everything is as it should be:

  • Bring your application up again or disable its read-only mode.
  • Terminate the read replica you used for dumping.
  • Terminate the source instance.




ec2dns 2.0: DNS for EC2 instances

This post originally appeared under the title “ec2dns 2.0 - DNS for EC2 instances” on April 29th, 2015 on the fruux blog.

Back in 2012 we released the first version of ec2dns, a set of command line tools that makes it easy to display public hostnames of EC2 instances and ssh into them via their tag name.

EC2 instances have random hostnames like ec2-99-99-99-99.compute-1.amazonaws.com. These hostnames are quite hard to remember - that’s where our tool comes into play. All you have to do is assign Name tags to your servers, e.g. appserver-1.


Previous versions of ec2dns provided wrapper commands like ec2ssh and ec2scp.

Example for the ec2ssh wrapper

$ ec2ssh appserver-1

To make interacting with EC2 instances even easier, we’re going one step further with today’s release of ec2dns 2.0, which finally justifies the dns part of the name ec2dns.

Version 2.0 has a built-in DNS server

The wrapper commands are still included, but with the built-in DNS server of ec2dns 2.0, working with EC2 instances is now even easier - all you need to know is the Name tag of the instance; just add .ec2 to it. Amongst various bugfixes and optimizations, ec2dns 2.0 has also been migrated to aws/aws-sdk-php v2.


Some usage examples

SSH into an instance via its name tag

$ ssh appserver-1.ec2

Copy a file from an EC2 instance onto your machine

$ scp ubuntu@appserver-1.ec2:/etc/nginx/nginx.conf .

Installing ec2dns

There are a few prerequisites, but aside from them, setting up ec2dns is done with just one command.

$ composer global require "fruux/ec2dns=~2.0.0"

If you want to use the DNS feature, you’ll just have to make sure that the ec2dns server is running on your system (we have instructions for OS X) and is used to resolve anything with the .ec2 TLD.
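On OS X, pointing the .ec2 TLD at a local DNS server can be done with a file in /etc/resolver. Here’s a minimal sketch, assuming the ec2dns server listens on 127.0.0.1 port 5300 (both are assumptions on my part - check the README for the actual address and port):

$ sudo mkdir -p /etc/resolver
$ printf "nameserver 127.0.0.1\nport 5300\n" | sudo tee /etc/resolver/ec2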

ec2dns is on GitHub. Check the README.md file if you want to learn more or have a closer look at the installation and update instructions.

Open source at fruux

fruux is powered by a ton of open source software, but we also release a lot of open source software ourselves. ec2dns is really just a tiny example. With sabre/dav and our other sabre/* projects, all of our core technology is open source, too. It is heavily used not just in our own product, but also powers sync features in great products from companies such as Atmail, Box and ownCloud.

I hope that ec2dns (or any of our other open source projects) will be useful for you. Perhaps we’ll even see you as a contributor on GitHub soon. :-)



EU VAT rules 2015: A mess for startups

This post originally appeared under the title “Why the new 2015 EU VAT rules are actively harmful to startups” on March 24th, 2015 as a guest post at tech.eu.

Seven years ago, in February 2008, the Council of the European Union approved Directive 2008/8/EC to little fanfare. Now, seven years later, this directive is back to haunt European startups on a massive scale with its new VAT (Value Added Tax) rules on the place of supply.

The new rules in a nutshell.

With effect from 1 January 2015, the place of supply for electronic services (as well as telecommunication and broadcasting services) provided to consumers is deemed to be the location of the consumer.

Previously, the place of supply for e.g. a SaaS (Software as a Service) company providing its services to consumers would - in general - have been the location of the company.

The place of supply is important for tax purposes. It decides where the company is liable for VAT.

Under the old rules, it didn’t matter where a customer was located. A company would always have paid the local VAT rate in its own country, because the place of supply was the location of the company (for example, a German SaaS company would have paid 19% VAT on its sales to its local tax office, regardless of whether the customer was located in Germany, Italy, Austria or elsewhere in the European Union).

Under the new rules, however, because the place of supply is deemed to be at the location of the customer, the company has to pay VAT to the foreign tax authorities (for example, a German SaaS company selling to a German customer still pays 19% VAT to its local tax office, but for its sales to Italian customers it has to pay 22% VAT to the Italian tax authorities, for sales to its Austrian customers 20% VAT to the tax authorities in Austria, and so on).

To comply with these new rules, companies either have to register in each country they sell to and report their sales there, or register for the MOSS (Mini One Stop Shop) scheme in their own country and file country-by-country reports there.


What is the idea behind these rules?

These new rules are designed to prevent so-called jurisdiction shopping. Under the old rules, multinational corporations (e.g. Apple) were able to benefit from lower VAT rates simply by deciding where to incorporate their European distribution subsidiaries (e.g. iTunes S.à r.l. in Luxembourg). The new rules prevent this jurisdiction shopping - at least for VAT purposes - as companies are always liable for VAT in the country of their customer and at the VAT rate of that country.

These new rules are harmful to startups.

One might think that companies have had seven years to prepare for these new rules, but many aren’t even aware of them due to the extremely poor communication from tax authorities and governments. And even for companies that are aware, startups typically have neither the manpower in-house nor an army of consultants and tax advisors to take care of the extra burden generated by these new rules – unlike multinational corporations, where the headcount of the in-house tax department alone typically surpasses the total headcount of an early-stage startup.

A startup is a company designed to grow fast. To achieve this growth, startups will often sell internationally right from the start, and because of this they run into the brick wall of international tax compliance long before they are prepared to deal with this complexity.

The Mini One Stop Shop doesn’t fix the problem.

With the MOSS scheme, companies don’t have to register and report in each and every country they sell to, but they still have to report their per-country sales to the MOSS in their own country. In Germany, for example, the competent authority is the Federal Central Tax Office.

Admittedly, reporting country-by-country sales to the MOSS instead of to each and every country is a lot less complex, but companies are still severely affected by these new VAT rules:

  • Requirement to ask customers for more data during checkout, which negatively affects conversions.
  • Additional work to compile country-by-country reports (and often no easy way to automatically gather this data from payment processors).
  • Development time for invoice handling instead of product features.
  • Additional work for tax advisors resulting in higher costs for companies.
  • More admin overhead with separate MOSS filings in addition to normal tax reporting.

Bottom line: startups are currently forced to burn precious time and money that would be way better invested in building a great product and finding the right customers for it.

How to fix VAT for EU startups?

One easy way to fix this would be to exempt companies with sales below a certain threshold from these rules – similar to the exemption for the supply of goods up to the threshold of 100,000 EUR in Article 34 of Directive 2006/112/EC. Up to this threshold, companies would pay VAT on their consumer sales in the country where they are located.

A threshold would allow startups to focus on building great products and investing their time and energy in their growth, but also effectively prevent big multinationals from jurisdiction shopping and ensure that VAT yields get distributed fairly between countries within the European Union.



The perfect AWS ELB SSL Configuration

When setting up an Elastic Load Balancer (ELB) at Amazon Web Services (AWS) with HTTPS listeners, the predefined SSL configuration (currently ELBSecurityPolicy-2015-02) is usually perfectly sufficient.

With this latest policy, outdated and vulnerable protocols such as SSLv2 and SSLv3 are disabled, server order preference is used and outdated vulnerable SSL ciphers such as RC4 are disabled. In tandem with HTTP Strict Transport Security (HSTS), this is a pretty solid setup for your ELB.

But there is a catch.

If you still have users stuck on Windows XP, their computers won’t be able to negotiate a common cipher with your load balancer, so all of these users will be unable to use your service.


The bad option: Use ELBSecurityPolicy-2014-10

Using ELBSecurityPolicy-2014-10 enables the ECDHE-RSA-RC4-SHA and RC4-SHA ciphers, which are supported by Windows XP. Unfortunately, RC4 is vulnerable (e.g. Mozilla and Microsoft, as well as many others, recommend disabling it where possible) and should be avoided at all costs, both to keep users safe and to maintain a good rating in your SSL Labs Report.

The good option: Configure your own Security Policy

If you have to support Windows XP clients, you can’t use ELBSecurityPolicy-2015-02 and you don’t want to use ELBSecurityPolicy-2014-10 because it’s weak. Luckily there’s the DES-CBC3-SHA cipher, which is supported by Windows XP and still considered secure.

To use that, you have to create a Security Policy for your load balancer and then activate it for each listener. This can be done with the AWS Command Line tools in two commands.

Update (March 18th, 2015)

The new AWS-provided policy ELBSecurityPolicy-2015-03 also solves this issue.

Create a new Security Policy

To create the security policy (you’ll have to repeat this step for each of your load balancers), run the following command. Make sure to set --load-balancer-name, --policy-name and --region correctly.

aws elb create-load-balancer-policy \
--load-balancer-name myloadbalancer \
--policy-name MySecurityPolicy-2015-03 \
--policy-type-name SSLNegotiationPolicyType \
--region eu-central-1 \
--policy-attributes \
AttributeName=Protocol-TLSv1,AttributeValue=true \
AttributeName=Protocol-TLSv1.1,AttributeValue=true \
AttributeName=Protocol-TLSv1.2,AttributeValue=true \
AttributeName=Server-Defined-Cipher-Order,AttributeValue=true \
AttributeName=ECDHE-ECDSA-AES128-GCM-SHA256,AttributeValue=true \
AttributeName=ECDHE-RSA-AES128-GCM-SHA256,AttributeValue=true \
AttributeName=ECDHE-ECDSA-AES128-SHA256,AttributeValue=true \
AttributeName=ECDHE-RSA-AES128-SHA256,AttributeValue=true \
AttributeName=ECDHE-ECDSA-AES128-SHA,AttributeValue=true \
AttributeName=ECDHE-RSA-AES128-SHA,AttributeValue=true \
AttributeName=DHE-RSA-AES128-SHA,AttributeValue=true \
AttributeName=ECDHE-ECDSA-AES256-GCM-SHA384,AttributeValue=true \
AttributeName=ECDHE-RSA-AES256-GCM-SHA384,AttributeValue=true \
AttributeName=ECDHE-ECDSA-AES256-SHA384,AttributeValue=true \
AttributeName=ECDHE-RSA-AES256-SHA384,AttributeValue=true \
AttributeName=ECDHE-RSA-AES256-SHA,AttributeValue=true \
AttributeName=ECDHE-ECDSA-AES256-SHA,AttributeValue=true \
AttributeName=AES128-GCM-SHA256,AttributeValue=true \
AttributeName=AES128-SHA256,AttributeValue=true \
AttributeName=AES128-SHA,AttributeValue=true \
AttributeName=AES256-GCM-SHA384,AttributeValue=true \
AttributeName=AES256-SHA256,AttributeValue=true \
AttributeName=AES256-SHA,AttributeValue=true \
AttributeName=DHE-DSS-AES128-SHA,AttributeValue=true \
AttributeName=DES-CBC3-SHA,AttributeValue=true

Activate your new Security Policy

Now activate the new policy for all of your HTTPS listeners on your load balancers. Make sure to set --load-balancer-name, --policy-names, --load-balancer-port and --region correctly.

aws elb set-load-balancer-policies-of-listener --load-balancer-name myloadbalancer --load-balancer-port 443 --policy-names MySecurityPolicy-2015-03 --region eu-central-1

That’s it. Now you have a secure SSL configuration for your ELB and still support Windows XP. We use this configuration at fruux (here’s our SSL Labs Report).
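To double-check that the Windows XP fallback actually works, you can attempt a handshake offering only that cipher; if the connection succeeds, DES-CBC3-SHA is negotiable (the hostname is a placeholder):

openssl s_client -connect www.example.com:443 -cipher DES-CBC3-SHA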



Federated Stellar addresses

A few days ago a nonprofit foundation, partly backed by Stripe, launched a new decentralized payment network (similar to Bitcoin) called Stellar. Unlike Bitcoin, Stellar doesn’t get mined (so maybe the vast amount of computing power of all this mining hardware could be used to sequence DNA or run SETI@home in the future), and Stellar supports transactions in arbitrary currencies via so-called gateways. To learn more about Stellar, their launch blog post and Stripe’s blog post are probably good starting points.

Another interesting feature of Stellar is federation.


What federation is about

Each Stellar user has both a wallet address (similar to Bitcoin) and a human-readable username for that address (which is quite handy, as sending a payment to e.g. watsi is a lot easier than sending one to gN4oHfh4iLRCSsrHVxVHzt6rZXs6EyTCe4). What’s nice about these usernames is that they are actually federated addresses and not just usernames. The username watsi is just the short version of watsi@stellar.org.

Stellar’s federation protocol makes it possible to set up federated addresses for your own domain. That means that if you control the domain example.org, you could receive payments under e.g. john.doe@example.org. Think of it as a kind of domain name system for wallet addresses. Stripe’s CTO Greg wrote a blog post with some more details about how federation works and also links to his sample federation server there.

How a client figures out the recipient’s wallet address

Just to give you a rough idea, here’s a quick overview of how a Stellar client figures out the recipient’s wallet address.

Locate and parse stellar.txt

When you are trying to send Stellar to e.g. john.doe@example.org, the client first tries to find a file called stellar.txt for the example.org domain by trying the following URLs in this order:

  • https://stellar.example.org/stellar.txt
  • https://example.org/stellar.txt
  • https://www.example.org/stellar.txt

If the file is found, the client parses it for the URL of the federation server that’s responsible for this domain. For this example the contents of that file might look something like this:

[federation_url]
https://federation-server.example.org
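If you want to follow along manually, this discovery step boils down to a few HTTPS requests; a minimal shell sketch:

for url in https://stellar.example.org/stellar.txt https://example.org/stellar.txt https://www.example.org/stellar.txt; do
    curl -fsS "$url" && break
done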

Query federation server

Now the client makes its final request and asks the responsible federation server for the wallet address of john.doe@example.org. If everything worked out as expected and the federation server actually knows that user, the client now knows the wallet address for john.doe@example.org and is able to send a payment.
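Roughly speaking, that last step is a plain HTTPS GET against the federation_url, with the username and domain passed as query parameters, and the server answers with a small JSON document containing the wallet address. The parameter names below are illustrative only (the protocol is still evolving, so check the current spec for the exact format):

curl "https://federation-server.example.org/federation?type=federation&domain=example.org&user=john.doe"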

Running your own federation server

To run your own federation server, you’ll need a valid SSL certificate both for the server where your stellar.txt file is located and for the domain your federation server itself is running on (those two could of course be the same). If you are also curious about Stellar federation, you can give my implementation (it’s on GitHub) a try.

Disclaimer

A quick word of warning: I put this together for fun and haven’t spent a lot of time on it, so please use it at your own risk and don’t sue me if you end up sending your life savings to /dev/null. :) The federation protocol itself also still seems to be evolving, so things might break when the protocol changes.

Let me know if you find bugs or have suggestions for improvement.

