Saturday, June 28, 2014

Public CDN auto-updated

Introducing libgrabber!

I am happy to announce that jsDelivr is now in the process of becoming fully automated.

libgrabber is an app that runs on our servers and can keep projects hosted at jsDelivr updated automatically without requiring the author or any contributors to manually process and upload the new versions.
This is very important because the repo has grown quite heavy, and requiring people to download it just to submit new versions is a bad idea.
Now our bot can take care of all the dirty work.

And it requires no modifications to the author's repo. All that's needed is a valid update.json file in jsDelivr's repo, inside the project that we want auto-updated.

Sources that libgrabber supports:
  • Github - Tagged versions
  • npm
  • bower
Here is an example update.json file:

  {
    "packageManager": "github",
    "name": "morris.js",
    "repo": "morrisjs/morris.js",
    "files": {
      "include": ["morris.css", "morris.js", "morris.min.js"]
    }
  }
That's it, morris.js is now auto-updated, and all future versions will be submitted automatically by our bot. The config also lets you control exactly which production files are served by using a mix of include and exclude rules.
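For instance, a config mixing both rule types might look like the following sketch (the project name and file patterns here are purely illustrative, not taken from a real project):

```json
{
  "packageManager": "github",
  "name": "example.js",
  "repo": "example/example.js",
  "files": {
    "include": ["dist/*.js", "dist/*.css"],
    "exclude": ["dist/*.src.js"]
  }
}
```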

Here is libgrabber in action updating the project domtastic to 0.7.4.

We plan to expand libgrabber in the future to achieve an even greater level of automation.
In the meantime you can help us develop the project or add update.json files to projects that don't have any.

Tuesday, March 18, 2014

CloudFlare joins jsDelivr

So far jsDelivr has been using MaxCDN and CDN.NET to provide people with unmatched performance and uptime. Unfortunately, CDN.NET decided to back out of open source sponsorship to focus on other important work they have going.
I am sad to see them leave and want to say a huge thank you for everything they did for jsDelivr!
They are really awesome guys, and one of the few companies that help and sponsor open source projects.
So over the last few weeks CDN.NET was removed from load balancing. This change was completely transparent and didn't result in any downtime whatsoever.
Smart load balancing for the win!

Now the good news is that CloudFlare has agreed to sponsor jsDelivr and is now live!

From now on, MaxCDN and CloudFlare become the backbone of the infrastructure and together will be serving all of jsDelivr's traffic. Of course with some help from custom locations sponsored by hosting providers.

These two companies are among the few that truly understand how important the open source community is and how companies should always encourage and support open source projects, no matter how small they may look. I had almost nothing when I asked for support from MaxCDN, and here we are now...
I am very proud to have two of the best web performance companies out there forget their competition, put aside all differences, and simply support an open source project that helps make the web faster.

As always, jsDelivr retains its smart routing, and users will get the fastest available provider each time they request a file from the jsDelivr CDN. The load balancing algorithm is improved almost every day, and I work closely with Cedexis to optimize everything we can and apply the latest improvements they announce.

No changes were made to the core infrastructure, so as always, even if CloudFlare goes down, jsDelivr will continue to work by switching all traffic to the remaining providers.

There are no changes in the chain of command, either. Neither MaxCDN nor CloudFlare owns the project or influences it in any negative way. That is a common misunderstanding I have noticed lately. The project is fully managed by me and the community.

And if anyone wants to contribute or help in any way, just visit our Github repo.

Thank you

Tuesday, December 24, 2013

jsDelivr weekly news #2

Latest changelog:

  • A new load balancing algorithm was pushed to production. It's a lot smarter and makes even heavier use of RUM performance data for each provider. The whole logic was revisited, and standalone servers are no longer used by default in all cases. All sponsored CDNs now have higher priority and are used by default. Standalone servers are only used for individual countries. For example, Finland has a rule that takes into consideration the performance of 2 additional sponsored servers on top of our CDNs. This way jsDelivr becomes even faster and a lot more stable. You are encouraged to report performance issues if you experience any problems.
  • An organization account was created for jsDelivr. All jsDelivr related projects will be created under the same account
  • A CloudFlare App is under development. If anyone is interested to help please let me know. JavaScript knowledge is required.
  • It is planned to open source the website. But before that, someone will have to rewrite the back-end code from scratch. I will most probably have to hire a freelancer to do that. Again, if anyone is interested in helping, let me know
  • All .zip generation was moved to the origin. The archives are no longer hosted in our Github repo
  • It is now possible to load the latest version of any project by using a special URL.
  • An application to automatically fetch and upload new versions of projects is under development
  • A lot of new features are planned to be released.
  • qTip2 switched to jsDelivr as their official CDN provider!

As always you can participate and contribute to the project. Coders, designers, companies... We need everyone.
We are also interested in including even more CDNs in jsDelivr. If you know people in the CDN industry, let them know that they can help the open source community by simply sponsoring a few TBs of traffic per month.
If MaxCDN and CDN.NET could see past their differences and competition and partner up to help an open source project then anyone can.

Also, in the last few months some of our providers experienced downtime. But I am pleased to say that our load balancing algorithm worked perfectly and jsDelivr's uptime remained 100%, without anyone being affected.

Saturday, November 9, 2013

How jsDelivr works

I wrote a detailed article on jsDelivr and how exactly it works.

Check it out and let me know if you like it.

Load balancing multiple CDNs or how jsDelivr works

Friday, June 28, 2013

jsDelivr weekly news

Here is the new changelog:

  • Origin moved from MaxCDN push to Rackspace Cloud Files. MaxCDN locations are now used via a pull zone. This was done to finally allow me to enable custom HTTP headers. All CORS issues were fixed, and I now have full control over HTTP headers.
  • Thanks to Cedexis a new more efficient load balancing algorithm was deployed. A bigger focus was made on uptime and failover features. If a provider goes down the switch should be almost immediate with no downtime at all.
  • A new custom location was added in Germany with 1Gbps unlimited bandwidth connection sponsored by
  • A new project called now collects uptime and performance data for jsDelivr. Make sure to check it out.
  • Two-factor authentication was enabled on all jsDelivr-related accounts for maximum security (all that support this feature).
  • alohaeditor package was removed from github because it was outdated and had more than 20000 files, most of them useless. Now the jsDelivr repo should be easier to fork and create new pull requests. No files were deleted from the CDN itself, so there will be no downtime for users that were using any of those files.

Help jsDelivr with valuable data

It doesn't matter if you are using our CDN or not, you can now help us gather valuable data and make our performance and uptime decisions even more accurate.
All you have to do is to add this script in your website just before </body>

What it does:
  • Loads in non-blocking async mode
  • After the page finishes loading, it starts pinging jsDelivr's providers in the background, without disturbing the user or the website in any way.
  • It then sends back to us very useful availability and performance data.
  • Based on this data our load balancing algorithm then decides what provider is best suited for each user of jsDelivr CDN. The more users your website gets the more accurate data we will have.
  • This is very important. Currently it gets about 190k measurements per hour. That's pretty good, but most of those users are USA-based, which means that users from other countries, like China and India, could experience slightly inaccurate decisions.
  • Please feel free to add this code in all of your blogs and websites. It helps everybody!
That's it for now; I will update this blog with more jsDelivr information as it comes in.

Sunday, June 2, 2013

jsDelivr News: New infrastructure and more

It's been a while since I posted news regarding jsDelivr.
Let's see what happened since then:

  • jsDelivr moved to a new infrastructure!
    • It now utilizes a Multi-CDN system using Cedexis load balancing.
    • NetDNA and CDN.NET are the main CDN providers. All traffic is load balanced between them.
    • The system checks both Uptime and Performance before deciding which CDN should serve the user. This algorithm guarantees a super reliable and fast network.
    • Tested in the wild. One of the CDN providers had Uptime issues (10'), jsDelivr to this day has 100% Uptime.  
  • Custom POPs are planned to be added in the next few days.  
    • A few hosting providers have sponsored jsDelivr with VPS nodes in locations where there is no CDN presence. (If you are a provider and want to sponsor a free VPS, please mail me.)
    • Traffic will be load balanced based on the same algorithm, Uptime+Performance.
    • Nginx+SSL will be installed to serve the content. A Caching Proxy configuration is going to be used to make sure the system is 100% malware free.
  • During the next month it's planned to switch from AWS Route 53 to Akamai-hosted DNS.
  • Pingdom published a small interview with me and a friend of mine. jsDelivr-jster interview
  • More than 600 projects are currently hosted!

Uptime and performance are very important to me, and this is why I do everything I can to improve jsDelivr even further. It is constantly under development, and I'm always brainstorming new ideas. I want to make it one of the most popular public CDNs out there.
I think it's safe to say that jsDelivr is currently the most advanced free public CDN out there. The only thing it lacks is some serious promotion. I hope people will eventually see that it deserves their trust and will start using it.


Don't forget to follow me @jimaek and stay updated.

Wednesday, April 10, 2013

How to handle time consuming PHP scripts

Sometimes you might have to write PHP scripts that take a long time to finish:
for example, creating/recovering backups, installing CMS demos, parsing large amounts of data, and so on.
To make sure your scripts will work as expected you will need to know and remember a few things.


The first thing you have to do is set the max_execution_time parameter in your PHP configuration.
If the script is executed by the web server (i.e. in response to an HTTP request from a user), then you will also have to set the correct timeout parameters in the web server config.
For Apache it's TimeOut and FastCgiServer… -idle-timeout (if you are using FastCGI).
For Nginx it's send_timeout and fastcgi_read_timeout (if you are using FastCGI).

The web server can also proxy requests to another web server, which will in turn execute the PHP script (e.g. nginx as the frontend, apache as the backend). In this case you will have to set the correct timeout parameters for the proxy.
For Apache it's ProxyTimeout, and for Nginx it's proxy_read_timeout.
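Putting the pieces together, the relevant settings might look like this sketch (the one-hour value of 3600 seconds is just an illustration; pick a limit that fits your scripts):

```
; php.ini — 0 would disable the limit entirely
max_execution_time = 3600

# nginx — PHP behind FastCGI
send_timeout          3600s;
fastcgi_read_timeout  3600s;

# nginx — acting as a proxy in front of another web server
proxy_read_timeout    3600s;
```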

User Interruption 

If the script is executed in response to an HTTP request, the user can easily interrupt it by canceling the request in their browser.
If you want the PHP script to continue running even after the user has canceled/interrupted the request, set PHP's ignore_user_abort parameter to TRUE.
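A minimal sketch of that setup at the top of a long-running script:

```php
<?php
// Keep running even if the user cancels the HTTP request.
ignore_user_abort(true);

// Also lift PHP's own execution time limit (0 = unlimited),
// so the script isn't killed by max_execution_time instead.
set_time_limit(0);

// ... long-running work goes here ...
```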

Loss of Open Connection

If the script opens a connection to any other service (DB, mail, FTP...) and the connection happens to close or time out, the script will fail to execute correctly.
For example, if during the execution of the script we open a connection to MySQL and then don't use it for some time, MySQL will close the connection based on its wait_timeout parameter.
In this case the first thing to try is increasing the connection timeout. For MySQL, for example, we can run the following query:
SET SESSION wait_timeout = 9999

But if we don't have this option, we can check whether the connection is still open at the points in the code where it could have timed out, and re-connect if needed.
For example, the mysqli module has a useful function called mysqli::ping to check the status of the connection, plus the configuration parameter mysqli.reconnect for automatic re-connection if the connection closes.
If you are trying to do the same for another kind of service and there is no similar configuration or function available, you could write it yourself.
The function should try to use your service and, in case of an error (you can use try and catch), re-connect. For example:

class FtpConnection
{
    private $ftp;

    public function connect()
    {
        $this->ftp = ftp_connect('ftp.server');
    }

    public function reconnect()
    {
        // Re-connect only if the connection no longer responds.
        if (!ftp_pwd($this->ftp)) $this->connect();
    }
}


class MssqlConnection
{
    private $db;

    public function connect()
    {
        $this->db = mssql_connect('mssql.server');
    }

    public function reconnect()
    {
        // A cheap query tells us whether the connection is still alive.
        if (!mssql_query('SELECT 1', $this->db)) $this->connect();
    }
}


Parallel Execution

It's not rare for long scripts to be executed on a schedule (cron), with the expectation that only one copy of the script will be running at any moment.
But it can happen that the script is executed on schedule while the previous run is still going. This will lead to unpredictable problems and errors.
In this case you should lock the resources being used, though this task is always solved case by case. You can check whether another copy is running and either wait for it to finish or cancel the current execution. To achieve this you can check the list of running processes, or lock the execution of the script itself, something like:

if (lockStart('script.php')) {
    // main php code
}
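lockStart() above is just a placeholder name; one simple way to implement it is with flock(), which the OS releases automatically if the script dies:

```php
<?php
// Sketch of a flock()-based lockStart(); the name comes from the
// pseudocode above, the implementation is one possible approach.
function lockStart($name)
{
    static $handles = [];
    $path = sys_get_temp_dir() . '/' . basename($name) . '.lock';
    $fp = fopen($path, 'c');
    // LOCK_NB makes flock() fail immediately instead of waiting.
    if ($fp === false || !flock($fp, LOCK_EX | LOCK_NB)) {
        return false; // another copy already holds the lock
    }
    $handles[$name] = $fp; // keep the handle open for the script's lifetime
    return true;
}

if (lockStart('script.php')) {
    // main php code
}
```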

Web Server Load

In cases where long scripts are executed by the web server, the client's connection to the web server stays open until the script finishes. This is not a good thing, since the web server's job is to execute the request as fast as possible and return the result. If the connection stays open, one of the web server's workers (processes) will be busy for a long time. And if a large number of these scripts are executed at the same time, all the workers (apache MaxClients) will be busy executing them, so the server will simply stop responding to any new requests, resulting in downtime.

That's why, when processing a user's request, you should execute the script in the background using php-cli to keep the server load as low as possible. If needed, you can use AJAX to check the status of the script's execution.
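One way to sketch this (the inline php -r one-liner stands in for a real worker script, and the marker file is a stand-in for whatever status tracking you use):

```php
<?php
// Launch a CLI PHP process in the background and return immediately.
// Output is discarded, and the trailing '&' detaches the process
// from this request, freeing the web server worker.
$marker = sys_get_temp_dir() . '/job_done_' . getmypid();
$job    = sprintf('touch(%s);', var_export($marker, true));
exec('php -r ' . escapeshellarg($job) . ' > /dev/null 2>&1 &');

// The web request can now finish; an AJAX endpoint could simply poll
// for $marker (or a DB row the worker updates) to report progress.
```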

Feel free to comment and share the article. Also don't forget to check out and use jsDelivr.