January 2017

Slow API response times for some customers
The issue turned out to be a programming error by a single store's developers that effectively launched a DoS attack on api.clerk.io. The issue has been resolved.
Jan 4, 16:20-16:24 CET

December 2016

[Scheduled] Infrastructure Upgrade
Upgrade successfully completed!
Dec 29, 00:00-01:22 CET

November 2016

No incidents reported for this month.

October 2016

.io DNS Issues
The .IO ccTLD team has restored service to the .io TLD. Traffic is back to normal, and we have not tracked any issues since 08:34 UTC. Our tech team will look into how to handle top-level domain issues in the future. We apologize for the interruption and hope you have a fantastic Friday.
Oct 28, 10:49-12:30 CET

September 2016

No incidents reported for this month.

August 2016

No incidents reported for this month.

July 2016

Networking Issues
We resolved the issue. It was a queue overload that caused several API servers to drop connections. This may have been a DoS attack, but it is more likely that an external developer accidentally flooded our network with thousands of requests per second. It resolved itself after a few minutes.
Jul 15, 16:19-17:14 CET
Slow API Response Time
This incident has been resolved.
Jul 11, 13:55-22:32 CET

June 2016

We are experiencing some slow API responses
Everything is running smoothly. Poorly scheduled data processing caused us to exceed our storage bandwidth, which raised average response times from the typical 30-40ms to 1s. The issue began at 08:08 UTC, was identified at 08:16 UTC and was fully resolved at 08:20 UTC, for a total of 12 minutes of slow API response times. This was a one-off issue. We have increased our storage bandwidth to cope with such corner cases and are working on making our scheduler smarter so it accounts for storage bandwidth as well.
Jun 16, 10:14-10:41 CET
High traffic
Everything is operational again. We hit a spike in traffic and immediately responded by upgrading our infrastructure accordingly. We will review our infrastructure so it can absorb such spikes in the future. We apologise for the inconvenience.
Jun 1, 11:35-11:45 CET

May 2016

API Slowdown
We experienced a period of slow API response times this morning from 9:30 to 9:45 due to peak traffic on our campaign product, where several larger customers executed campaigns simultaneously. The slowdown was caused by a bottleneck in our embed servers computing template images for emails. Immediately after identifying the bottleneck we upgraded our compute infrastructure to handle the increased load, and everything was operating smoothly again with average API response times around 50ms.
May 24, 10:07 CET

April 2016

This morning's slowdowns
This morning we experienced some slowdowns on our service. A Memcached process had crashed during the night, so as this morning's traffic built up our database started to become overloaded. The issue is now resolved and our service is back to full speed. We will expand our monitoring to cover individual Memcached processes in more depth to avoid this kind of error building up in the future.
Apr 20, 09:11 CET

March 2016

[Scheduled] Backend Upgrade
The scheduled maintenance has been completed.
Mar 30, 22:00-22:30 CET

February 2016

[Scheduled] Infrastructure Upgrade
The scheduled maintenance has been completed.
Feb 18, 03:28 CET
Slow response time
The issue is fixed. A database process had crashed, slowly leading to a backlog of queries and thus the slow performance.
Feb 15, 11:14-11:20 CET

January 2016

DoS Attack
The attack has been blocked.
Jan 28, 10:21-11:09 CET
API interruptions
This incident has been resolved.
Jan 25, 13:17 - Jan 26, 10:04 CET
Service Migration
Everything is running smoothly.
Jan 3, 17:56 - Jan 4, 01:59 CET

December 2015

London Connectivity Issues - continued
And now we seem to be back. We will continue to monitor the situation. Happy New Year!
Dec 31, 16:17-17:01 CET
London Connectivity Issues
It looks like our hosting provider has the attack under control. We will keep monitoring the situation and continue moving our infrastructure away.
Dec 31, 14:06-16:15 CET
Emergency Database Migration
We are back up again! We will do some checks to make sure everything is running smoothly.
Dec 30, 08:06-09:24 CET
London Datacenter Outage
The issue has been resolved and we are back up. We are sincerely sorry for this incident. We strive extremely hard for no downtime at all and take incidents like this extremely seriously! We have decided to move all our instances away from this provider over the next few days, removing them entirely from our setup.
Dec 30, 00:08-03:44 CET
Network Connectivity - London
Everything is back to normal.
Dec 17, 13:17-13:42 CET
Possible Service Interruptions
Everything went fine without any interruptions. Strictly speaking, we had 42 seconds of downtime in total over 3 periods in a 12-hour window, so nothing noticeable to end consumers. Everything is up and running again. Data processing will take some hours to work through the backlog from the day.
Dec 14, 11:48-23:32 CET

November 2015

No incidents reported for this month.

October 2015

[Scheduled] Infrastructure Upgrade
Everything went well and we are running smoothly.
Oct 23, 00:00-00:59 CET
Short Outage
Everything is running smoothly, and an update containing the fix has been successfully released.
Oct 12, 11:15-12:40 CET

September 2015

API Slowdown
Everything is running smoothly again. Unfortunately, one of our new customers had a script that ran wild and effectively performed a DDoS attack on our service, causing the slowdown. We are in contact with their IT department to resolve the issue.
Sep 22, 15:23-15:39 CET

August 2015

DDoS Attack
The attack is contained and everything is running smoothly again. We'll keep monitoring the situation.
Aug 17, 12:15-12:29 CET
SSL Certificate Issue
The certificate has now been configured correctly.
Aug 1, 13:08-13:11 CET

July 2015

No incidents reported for this month.

June 2015

API Slowdown
The routing issue has been resolved. We will follow up with a post mortem later.
Jun 3, 13:35-14:05 CET

May 2015

Datacenter Issues
Everything is running smoothly again. There was a short denial-of-service attack targeting another server on the same network segment, flooding the network's capacity. The attack was quickly blocked and everything is up and running.
May 19, 14:03-14:35 CET

April 2015

DDoS Attack
Everything is now running smoothly again. The DDoS attack has been contained, but we continue to monitor the situation.
Apr 30, 00:18-00:25 CET

March 2015

[Scheduled] Infrastructure Upgrade
The scheduled maintenance has been completed.
Mar 6, 23:00 - Mar 7, 00:30 CET

February 2015

Minor instability
Due to a database error we had a few minutes of instability. The issue has now been resolved and a patch has been released to prevent this error in the future. We are truly sorry for the inconvenience. For any further questions please contact support@clerk.io.
Feb 9, 15:32 CET

January 2015

Full Outage
Everything is now running smoothly. The issue was caused by too many simultaneous data imports from a single customer, which overloaded the servers. We have now released a patch to prevent this. We know how important a part of your stores we are, and we are always deeply sorry when there are technical issues.
Jan 14, 14:43-16:31 CET

December 2014

5 minute API outage
We have had no further issues - everything is running smoothly. The issue was caused by too many batch operations running at the same time, causing new incoming requests to queue up. We have now added a limit to the number of concurrent batch jobs and will be working on a smarter scheduler for batching.
Dec 1, 15:22-18:11 CET

November 2014

Short Service Interruption
Due to an error during a minor update, all services were unavailable between 10:06 and 10:09 UTC today. We immediately rolled back to get the systems back up and are now running smoothly again. We apologise for the inconvenience.
Nov 27, 11:18 CET

October 2014

Periodic slowdowns
The issue has been resolved. It was caused by a damaged process on an API server slowing down that server and thus its response times.
Oct 27, 20:57-21:19 CET

September 2014

No incidents reported for this month.

August 2014

[Scheduled] Infrastructure upgrade
All upgrades were completed with only a few seconds of total downtime.
Aug 28, 01:00-03:20 CET
API Downtime
We have now fully backtracked the incident and determined the source. This Saturday night we implemented an extra backup service for emergency backups (a backup of the backups). To keep the system online during the process we tuned some configurations to allow for extra capacity. When we were done we reset all the configurations... except one, which was still set too high! This caused internal processing queues to slowly grow too large and finally crash the system this Monday afternoon. Our entire software infrastructure is decomposed into individual services, so as soon as we received the alerts from our monitoring systems we were able to identify the crashed service and get everything back online within minutes. To prevent this in the future we will of course keep detailed checklists when making system changes. We will also look into more detailed monitoring for our early warning systems. We apologize for the trouble this has caused.
Aug 11, 13:45-18:27 CET

July 2014

No incidents reported for this month.

June 2014

No incidents reported for this month.

May 2014

Data Sync instability for some powerhosting.dk customers
powerhosting.dk has informed us that they are still working on the issue on their end and expect a fix within 1-2 weeks. They will give us a more detailed explanation tomorrow afternoon. In the meantime we will do our best to minimize the impact this will have on our powerhosting customers. We will mark this issue as resolved and keep the affected powerhosting.dk customers up to date via email.
May 21, 10:20-14:54 CET

April 2014

DNS issues
Linode has now resolved the issue and all DNS servers are up and running again.
Apr 30, 13:35-14:01 CET
[Scheduled] Server Upgrades
All tests have passed successfully for all systems.
Apr 29, 01:00-02:50 CET

March 2014

DB Connectivity Issues
Everything is now running again. We will keep a limit on imports in place for the rest of the weekend.
Mar 28, 15:33-16:42 CET
Database Issues
Full diagnostics have now been run and everything is back to normal.
Mar 27, 09:03-10:53 CET
API Down
MySQL is now accepting new connections again. Everything is up and running.
Mar 27, 07:42-07:49 CET
Minor instability
Due to a faulty server process we had some stability issues with our service between 09:11 and 09:17 UTC. This only affected about 3% of our total traffic. Everything is now up and running again.
Mar 3, 11:04 CET

February 2014

No incidents reported for this month.