German LegalTech Overview

This post originally appeared under the title “The rise of ‘LegalTech’: a German market overview” on June 24th, 2016 as a guest post. The latest version of the LegalTech landscape has its own page.

Everyone in tech has heard about banks getting disrupted by - or more recently trying to partner with - startups in the FinTech space. A somewhat similar, although still much less visible, development is happening in the legal market - another highly regulated industry. Meet LegalTech.

Until very recently, “tech” seemed to be something evil and awful that most lawyers wanted to beat with a stick. But at the 67th Conference of German Lawyers in Berlin, the so-called Anwaltstag, something unexpected happened: it was decided that next year’s conference will take place under the motto “LegalTech and Innovation”. This year’s motto wasn’t about tech, but there was a panel discussion titled “LegalTech versus Lawyers?”.

Looking at the title - especially the word versus and the question mark - this was perhaps a little prelude for next year. There was heated discussion and some skepticism. Among other things, lawyers wondered whether LegalTech might lower quality compared to tailor-made legal advice, how clients would still build trust in their lawyers if the personal interaction between them is partly replaced with tech, whether tech-driven legal advice will negatively affect their compensation, and whether brand-building and valuable client relationships might shift from law firms to LegalTech companies.

But there also seemed to be genuine excitement from a few lawyers. If you look at how tech is currently used in many small and medium-sized law firms, that excitement alone already is progress. Many firms still do not have computers on the desks of their professionals, case files are not digitized, and lawyers carry huge stacks of paper to court, rocking their fax machines like back in the Nineties. Instead of typing their legal prose into a word processor themselves (or at least using speech recognition), they record audio cassettes and have staff type them out - something that virtually every other industry has gotten rid of.

More often than not, these drafts are circulated for a few rounds of corrections - which are often also recorded onto an audio cassette and then typed out again. If the lawyer is lucky, at some point there is a version that they sign and that can be sent with one of those fancy fax machines or via snail mail. Naturally, nothing just magically happens in a law firm - when lawyers are finally happy with a draft and want to sign and send it, they have to decree what should happen next, which is a fancy way of saying they add a note for their assistants: “Print the final version and two copies, bring them back to me for signature, put them into an envelope and mail them to the court, then put the whole file into a folder and put it back onto my desk for review in 4 weeks.” If this sounds inefficient and like a lot of wasted time to you, that’s because it is.

So what is LegalTech and why does it matter?

Legal Technology, also known as LegalTech, refers to technology and software that make lawyers more efficient. But LegalTech does not stop there. Traditional law firms are confronted with an ever-growing number of startups working on specialized legal products that directly compete with law firms.

Just like traditional banks are under attack by FinTech startups that are unbundling banking, the same development is happening in the legal industry. LegalTech is still in its infancy and will evolve. It will also without doubt change the legal market forever. This is my attempt to categorize the current German LegalTech landscape:

German LegalTech Overview


A core motive is automating things that can be automated. So at the core of LegalTech are attempts to provide technology-based and standardized legal advice products for well-understood problems occurring in great volume. Instead of lawyers providing expensive hand-crafted legal advice, clients almost exclusively interact with software, handling the case intake, managing the whole process, keeping the client informed and generating the necessary documents. If anything, lawyers take care of handling exceptions and signing documents.

Such standardized legal advice products are offered for claims for delayed or canceled flights, claims for delayed or canceled train rides, mortgage termination, challenging speeding tickets, contract termination, going after Volkswagen for Dieselgate compensation, challenging your social security grant, and many other situations. Similar services will emerge in other areas, e.g. defending against getting fired, filing for insolvency, filing for divorce, terminating rental contracts, and vacating properties.

Tech companies offering such services, but also businesses in general with a regular need for legal advice, will need LaaS (Lawyers-as-a-Service). Such legal process outsourcing, or LPO for short, is offered both in Lawyer-to-Lawyer (L2L) and Lawyer-to-Business (L2B) flavors, and is sometimes less and sometimes more technology-driven. As companies keep optimizing their internal processes, LPO will most probably become more and more technology-driven. We’ll also see an “API-fication” of these services, which will make it possible to integrate them directly into other applications and automate processes that require legal advice.

Another exciting category is AI and eDiscovery. Companies in this space are trying to reduce the workload for lawyers working with massive amounts of data and documents. Examples are products that automatically analyze contracts to allow risk assessments without having lawyers read through thousands of pages, and products that help lawyers structure documents and create links between them. Larger law firms are also conducting their own experiments in this category (e.g. IBM Watson at Baker & McKenzie).

Gone are the days when law firms only worked cases in their geographic vicinity; that is where platforms come in that allow lawyers to find other lawyers to send to their court appointments. With higher workloads and more competition, optimizing processes inside a law firm is also critically important, e.g. with easy billing and reporting software or effective hiring of legal talent. Most communication with clients still happens face to face or via phone, fax or email - so WebRTC-based video chat, including features to exchange documents via the browser, is a somewhat fresh approach. An applicable label for this category is maybe tools, because products in this category are used by lawyers and inside law firms.

Somewhat tangential is legal practice management (LPM) software. LPM solutions are the central hub of a law firm where case management, client management, billing, reporting, assigning tasks to staff, and potentially in the future also integration of other legal technology will happen. Current legal practice management solutions are still somewhat dated desktop applications – at least in Germany the shift to cloud-based solutions has yet to come. LPM solutions are also primarily used by staff today, but as these tools get more powerful and deal more with actual casework, future lawyers will start interacting much more with these applications themselves.

Still the most important tool for lawyers today, though, and among the first to go digital, is the legal content itself. Legal databases offer the various legal texts, regulations, and case law, as well as combinations of those including legal handbooks and secondary literature. The shift towards open data is also in full swing in the legal industry, so more and more of this content will be freely available in the future, lowering the barriers to building services and applications around this content, but also affecting the revenue streams of the established providers.

Sometimes all that is needed is a good contract. Companies in the smart contracts category are trying to cover this with varying levels of smartness. For well-defined problems, smart contracts are a promising alternative, but as with technology-based and standardized legal advice products, for the foreseeable future lawyers will still have plenty of work whenever things are a little more complex.

Somewhat similar to the previously mentioned legal process outsourcing solutions aimed at lawyers (L2L) and businesses (L2B), there are also Lawyer-to-Consumer (L2C) companies. These consumer legal advice products range from generic question-and-answer portals that connect consumers with professionals from various areas, over specific legal question-and-answer portals, to companies offering legal advice packages for defined problems at a fixed price.

And lastly, there are discovery and rating portals that help consumers find lawyers with certain specializations or rate professionals (including lawyers) they have worked with, as well as legal content portals aimed both at consumers and at people with a legal background.

There are exciting times ahead, both for lawyers and tech folks diving into LegalTech. I for one am looking forward to our lawyer-automation overlords. ;-)

Did I miss any German LegalTech companies? Disagree with the categorization? Should there be additional categories? I’d <3 to hear from you.

Any comments? Ping me on Twitter.

Apache, MySQL & PHP on OS X El Capitan

OS X 10.11 ships with both a recent version of Apache (2.4.x) and PHP (5.5.x), so you’ll just have to install MySQL and go through a few steps to get everything up and running.


First, you have to create a web root in your user account:

mkdir ~/Sites

Then add a configuration for your user:

sudo tee /etc/apache2/users/$USER.conf <<EOF
<Directory "$HOME/Sites/">
    Options Indexes MultiViews FollowSymLinks
    AllowOverride All
    Require all granted
</Directory>
EOF

Now we have to make sure that our user config above actually gets loaded:

sudo tee -a /etc/apache2/other/$USER-settings.conf <<EOF
Include /private/etc/apache2/users/*.conf
EOF

If you want to use vhosts, you’ll also have to make sure that the vhosts config gets loaded:

sudo tee -a /etc/apache2/other/$USER-settings.conf <<EOF
Include /private/etc/apache2/extra/httpd-vhosts.conf
EOF

After that, configure vhosts as necessary in /etc/apache2/extra/httpd-vhosts.conf (don’t forget to remove the examples in there).
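For illustration, a minimal vhost entry could look like the following - written in the same tee style as above. Note that myproject.test and the DocumentRoot path are just example values, not something this setup requires:

```shell
# Append an example vhost to the vhosts config
sudo tee -a /etc/apache2/extra/httpd-vhosts.conf <<EOF
<VirtualHost *:80>
    ServerName myproject.test
    DocumentRoot "$HOME/Sites/myproject"
    <Directory "$HOME/Sites/myproject">
        AllowOverride All
        Require all granted
    </Directory>
</VirtualHost>
EOF
```

Don’t forget a matching entry in /etc/hosts (e.g. 127.0.0.1 myproject.test), otherwise the hostname won’t resolve locally.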

It seems that mod_rewrite no longer gets loaded by default, so we’ll also add that to our config:

sudo tee -a /etc/apache2/other/$USER-settings.conf <<EOF
LoadModule rewrite_module libexec/apache2/mod_rewrite.so
EOF


PHP doesn’t get loaded by default. So we’ll also add it to our config:

sudo tee -a /etc/apache2/other/$USER-settings.conf <<EOF
LoadModule php5_module libexec/apache2/libphp5.so
EOF

You should also configure a few settings in /etc/php.ini:

sudo tee -a /etc/php.ini <<EOF
date.timezone = "`sudo systemsetup -gettimezone | awk '{print $3}'`"
display_errors = on
error_reporting = -1
EOF

To activate these settings you have to restart Apache:

sudo apachectl restart
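To quickly verify that PHP is actually loaded, you can drop a phpinfo() file into your web root (the filename info.php is just an example) and then open http://localhost/~yourusername/info.php in your browser:

```shell
# Create the web root if it doesn't exist yet and add a phpinfo() test page
mkdir -p ~/Sites
echo '<?php phpinfo();' > ~/Sites/info.php
```

If the page renders the usual purple PHP info tables, Apache and PHP are wired up correctly. Remove the file again afterwards - you don’t want to expose your configuration.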

If you also need PEAR/PECL, follow these instructions.


MySQL is not shipped with OS X, so we’ll have to install that manually. Instead of going for an installer package, we’ll use Homebrew. Once Homebrew is installed, installing MySQL is as simple as:

brew install mysql

If you want to start MySQL automatically, the Caveats section from the output of the following command will show you how:

brew info mysql
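If you just want to start the server manually for now, something like the following should work for a default Homebrew MySQL install (right after installation, root has no password yet - set one before doing anything serious):

```shell
# Start the MySQL server in the background
mysql.server start

# Connect as root (no password set by default after a fresh install)
mysql -uroot
```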

Any comments? Ping me on Twitter.

PEAR on OS X 10.11 El Capitan

Like previous versions of OS X, 10.11 also ships with PEAR, but unlike previous versions (e.g. OS X 10.10 and OS X 10.9) it’s currently not possible to install it, due to permission issues. These issues seem to be caused by the new System Integrity Protection/rootless feature of El Capitan and are a known issue at Apple (rdar://problem/22294777).

I’ll update this post if I come across any new information about this.

Any comments? Ping me on Twitter.

AWS RDS MySQL migration with replication

Amazon Web Services makes it quite easy to replicate MySQL instances running on their RDS service - but sometimes, you’ll still have to do things manually.

Migrating an unencrypted RDS instance into one that uses encryption (or vice versa) is such a case, because you (currently) cannot use an unencrypted snapshot to create a new encrypted instance and you can’t create an encrypted read replica of an unencrypted database.

Migrating an instance into the Frankfurt (eu-central-1) region is another example, where RDS currently won’t help you as you (currently) can neither copy snapshots into the Frankfurt region, nor create a cross-region read replica there (As of 2015-09-21 cross region read-replicas are supported in eu-central-1).

This post explains how to migrate using MySQL replication.

Disclaimer: This post is still a work in progress, I published it regardless to force myself to actually finish and polish it. So until you see this warning here, you should take everything with a grain of salt… well, you should do that anyway with everything you hear or read. ;-)


We want to migrate from a source RDS instance to our target instance. You should definitely practice this process if you are trying to migrate a production database without downtime.

To do this, we’ll have to go through the following steps:

  1. Create a read replica of the source instance (in the same region as the source instance).
  2. Use the read replica to extract a full dump of the source database (without putting heavy load on the source instance).
  3. Spin up a new RDS instance as the target instance.
  4. Import the dump from the source database.
  5. Set up manual replication between the new target instance and the source instance.
  6. Wait for replication to catch up.

Once replication has finally caught up (and you’ve reconfigured your application to use the new instance), we’ll do some cleanup and get rid of the read replica we created in step 1, as well as the source instance.

1. Create a read replica of your source database

You can do that via the aws cli or the web console.
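With the aws cli, this could look roughly like the following (the instance identifiers are placeholders - use your own, and see aws rds create-db-instance-read-replica help for further options):

```shell
# Create a read replica of the source instance in the same region
aws rds create-db-instance-read-replica \
    --db-instance-identifier source-replica \
    --source-db-instance-identifier source-instance
```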

2. Extract a database dump from the read replica

First, you’ll have to stop replication on the read replica by running the following SQL statement:

CALL mysql.rds_stop_replication;

Now we’ll have to figure out at which position we’ll later have to continue replication once we’ve imported the dump. Run the following SQL statement on the read replica:

SHOW SLAVE STATUS;

Write the following values down, you will need them later:

  • Master_Log_File
  • Read_Master_Log_Pos

Next, prepare the database dump:

echo "SET autocommit=0;
SET unique_checks=0;
SET foreign_key_checks=0;" > source_dump.sql

Then create the actual database dump with mysqldump (it’s highly recommended to run your dump in a screen session; depending on your data, you may have to set --max_allowed_packet accordingly). Replace replica-endpoint with the endpoint of your read replica:

mysqldump -h replica-endpoint -u username -ppassword -P 3306 --quick --databases db1 db2 db3 --routines --triggers >> source_dump.sql; echo "COMMIT;" >> source_dump.sql

Creating the dump will take quite a while. Good night and see you tomorrow. ;-)

3. Start a new RDS instance as your target instance

You can do that via the aws cli or the web console.
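Again, a rough aws cli sketch with placeholder values (instance identifier, class, storage size and credentials are all examples - adjust them, and drop --storage-encrypted if you don’t want encryption at rest):

```shell
# Spin up the new target instance
aws rds create-db-instance \
    --db-instance-identifier target-instance \
    --db-instance-class db.m3.medium \
    --engine mysql \
    --allocated-storage 100 \
    --master-username admin \
    --master-user-password secret \
    --storage-encrypted
```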

4. Import your dump into the new target instance

To speed up the import a bit, you should set innodb_flush_log_at_trx_commit = 2 for the target instance.
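On RDS you can’t simply SET GLOBAL this variable - it has to be changed via the DB parameter group that the target instance uses. A sketch with the aws cli (target-params is a placeholder for the name of your parameter group):

```shell
# Relax flushing during the bulk import; set it back to 1 afterwards
aws rds modify-db-parameter-group \
    --db-parameter-group-name target-params \
    --parameters "ParameterName=innodb_flush_log_at_trx_commit,ParameterValue=2,ApplyMethod=immediate"
```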

Once that is done, it is time to import the dump into the new instance (it’s highly recommended to run your import in a screen session; depending on your data, you may have to set --max_allowed_packet accordingly). Replace target-endpoint with the endpoint of your target instance:

mysql -h target-endpoint -u username -ppassword < source_dump.sql

This will again take quite a while.

5. Set up manual replication between the new target instance and the source instance.

To set up replication, you’ll first have to create a replication user on the source instance:

GRANT REPLICATION SLAVE on *.* to 'username'@'%' identified by 'password';

Then head over to the target instance and configure replication (you’ll now need the values for Master_Log_File and Read_Master_Log_Pos that you wrote down earlier):

CALL mysql.rds_set_external_master (
    'source-instance-endpoint'
  , 3306
  , 'username'
  , 'password'
  , 'value of Master_Log_File'
  , value of Read_Master_Log_Pos
  , 0
);

Now, we’ll start replication by running the following SQL statement on the target instance:

CALL mysql.rds_start_replication;

You can check if replication is actually running with SHOW SLAVE STATUS; on the target instance (Slave_IO_Running and Slave_SQL_Running should both be Yes).

6. Wait for replication to catch up.

Once replication has caught up, you should set innodb_flush_log_at_trx_commit back to 1 again.

You’ll probably also want to migrate your db users from the source instance. You can get a listing of them on the source instance with:

SELECT user, host FROM mysql.user;

Next run the following statement for every user you want to migrate on the source instance:

SHOW GRANTS FOR 'username'@'%';

Then run the output of those queries on the target instance to create the users there and run FLUSH PRIVILEGES; after that.
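If you have many users, generating those GRANT statements can be scripted. A sketch (endpoint and admin credentials are placeholders):

```shell
# Generate a "SHOW GRANTS" statement for every user on the source instance...
mysql -h source-endpoint -u admin -p -N -e \
  "SELECT CONCAT('SHOW GRANTS FOR ''', user, '''@''', host, ''';') FROM mysql.user" \
  > show_grants.sql

# ...run them, and terminate each resulting GRANT statement with a semicolon
mysql -h source-endpoint -u admin -p -N < show_grants.sql | sed 's/$/;/' > grants.sql
```

The resulting grants.sql can then be executed on the target instance, followed by FLUSH PRIVILEGES;.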

Final steps and clean up

  • Now you should set your application into read-only mode or take it down for a little while (e.g. if you use auto_increment, you don’t want duplicates because you already had writes to the new target instance but are still replicating writes with the same record id from the source instance).
  • Configure your application to use the new target instance.
  • Make sure that replication has fully caught up with SHOW SLAVE STATUS; on the target instance.
  • Stop replication by calling CALL mysql.rds_stop_replication; on the target instance.
  • Remove the replication configuration by calling CALL mysql.rds_reset_external_master; on the target instance.

After you’ve verified that everything is as it should be:

  • Bring your application up again or disable its read-only mode.
  • Terminate the read replica you used for dumping.
  • Terminate the source instance.


Any comments? Ping me on Twitter.

ec2dns 2.0: DNS for EC2 instances

This post originally appeared under the title “ec2dns 2.0 - DNS for EC2 instances” on April, 29th 2015 on the fruux blog.

Back in 2012 we released the first version of ec2dns, a set of command line tools that makes it easy to display public hostnames of EC2 instances and ssh into them via their tag name.

EC2 instances have random public hostnames that are quite hard to remember - that’s where our tool comes into play. All you have to do is assign Name tags to your servers, e.g. appserver-1.

Previous versions of ec2dns provided wrapper commands like ec2ssh and ec2scp.

Example for the ec2ssh wrapper

$ ec2ssh appserver-1

To make interacting with EC2 instances even easier, we’re going one step further with today’s release of ec2dns 2.0, which finally justifies the dns part in the name ec2dns.

Version 2.0 has a built-in DNS server

The wrapper commands are still included, but with the built-in DNS server of ec2dns 2.0, working with EC2 instances is now even easier - all you need to know is the Name tag of the instance; just add .ec2 to it. Amongst various bugfixes and optimizations, ec2dns 2.0 has also been migrated to aws/aws-sdk-php v2.

ec2dns 2.0

Some usage examples

SSH into an instance via its name tag

$ ssh appserver-1.ec2

Copy a file from an EC2 instance onto your machine

$ scp ubuntu@appserver-1.ec2:/etc/nginx/nginx.conf .

Installing ec2dns

There are a few prerequisites, but aside from them setting up ec2dns is done with just one command.

$ composer global require "fruux/ec2dns=~2.0.0"

If you want to use the DNS feature, you’ll just have to make sure that the ec2dns server is running on your system (we have instructions for OS X) and used to resolve anything with the .ec2 TLD.
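On OS X, pointing a TLD like .ec2 at a local DNS server is typically done with a file in /etc/resolver. A sketch - note that the port here is an assumption; use whatever address and port your ec2dns server actually listens on, as described in the ec2dns instructions:

```shell
# Route all lookups for the .ec2 TLD to a locally running DNS server
sudo tee /etc/resolver/ec2 <<EOF
nameserver 127.0.0.1
port 5300
EOF
```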

ec2dns is on GitHub. Check the README if you want to learn more or have a closer look at the installation and update instructions.

Open source at fruux

fruux is powered by a ton of open source software, but we also release a lot of open source software ourselves - ec2dns is really just a tiny example. With sabre/dav and our other sabre/* projects, all of our core technology is open source too, and it is heavily used not just in our own product: it also powers sync features in great products from companies such as Atmail, Box and ownCloud.

I hope that ec2dns (or any of our other open source projects) will be useful for you. Perhaps we’ll even see you as a contributor on GitHub soon. :-)

Any comments? Ping me on Twitter.