When we started building the new Metigy website as part of our recent rebrand, we laid out a couple of requirements, the main one being that it had to be fast and scalable. This article focuses on the backend of the website and how we delivered that objective very effectively, using WordPress.
Now, before I continue, a few people will grumble at me about WordPress not being the best blog system. It does have its problems, and I've managed to hit a very large number of them. Face first, whilst on fire, tumbling off a cliff being chased by angry cats with knives. But that's another story.
We were aiming to make this site super lean, easy to update via continuous deployment tools like Elastic Beanstalk, and completely self-contained where possible. But we still wanted to leverage the advantages of WordPress as its admin tool.
There are many projects out there that try to solve this problem, and I looked at them and learnt from their ideas.
What we ended up with was a fairly simple stack (obviously this is simplified):
The long and the short of it is that we started by figuring out what we wanted from the database, and writing those queries. It involved a series of joins and GROUP_CONCATs that meant we could run the queries in a fraction of a second.
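To make the idea concrete, here is a minimal sketch of that kind of query. The schema and table names are purely illustrative (the real site queries the WordPress tables, in PHP); SQLite stands in for MySQL here, and both support GROUP_CONCAT. The point is that one join-plus-aggregate query returns each post with its related rows collapsed into a single column, instead of one query per post:

```python
import sqlite3

# Hypothetical, simplified schema standing in for the WordPress
# tables (wp_posts, wp_term_relationships, etc.).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE posts (id INTEGER PRIMARY KEY, slug TEXT, title TEXT);
    CREATE TABLE post_tags (post_id INTEGER, tag TEXT);
    INSERT INTO posts VALUES (1, 'hello-world', 'Hello World');
    INSERT INTO post_tags VALUES (1, 'news'), (1, 'launch');
""")

# One query returns each post with its tags collapsed into a single
# column, so we never loop back to the database per post.
rows = conn.execute("""
    SELECT p.id, p.slug, p.title, GROUP_CONCAT(t.tag) AS tags
    FROM posts p
    LEFT JOIN post_tags t ON t.post_id = p.id
    GROUP BY p.id
""").fetchall()

for post_id, slug, title, tags in rows:
    print(post_id, slug, title, tags.split(","))
```

Because all the joining and grouping happens inside the database engine, the application code sees one flat result set it can cache as-is.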
Once we have the data, we run a series of transformations over the results that pre-generate most of what we need for output. This includes the URL (generated using named routes in FastRoute), the author's details, images (including fallbacks if no feature image is defined), and so on.
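A sketch of that transformation step, with assumed names throughout: `transform`, the row dictionary shape, and the fallback path are all illustrative, and a simple format string stands in for FastRoute's named-route generation (which is PHP in the real stack):

```python
DEFAULT_IMAGE = "/assets/fallback.jpg"  # assumed fallback path

def transform(row):
    """Pre-generate everything the templates need for one post."""
    return {
        "title": row["title"],
        # URL built once, up front, instead of in the template
        # (the real site uses FastRoute named routes for this)
        "url": "/blog/{slug}/".format(slug=row["slug"]),
        "author": {"name": row["author_name"]},
        # fall back to a default image when no feature image is set
        "image": row.get("feature_image") or DEFAULT_IMAGE,
    }

posts = [transform(r) for r in [
    {"title": "Hello", "slug": "hello", "author_name": "Greg"},
]]
print(posts[0]["url"], posts[0]["image"])
```

Doing this work once at cache-prime time means the templates never compute anything per request.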
Also, to make finding things quick (given that we usually only access posts by an id, slug or similar), we create an index for each of those keys in the cache too. This gives us really quick lookups for all common searches.
By doing that, everything is ready for the front-end Twig templates and consistent components that can all be re-used. All the Twig has to handle is deciding what the start and end rows are, and whether it needs to show the back and forward buttons.
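The arithmetic the templates rely on is small enough to sketch. The real templates are Twig; this is the same logic in Python, with assumed names. Given a page number, page size and total count, it works out the slice bounds and whether the back/forward buttons are needed:

```python
def page_window(page, per_page, total):
    """Compute the slice of the cached post list for one page."""
    start = (page - 1) * per_page
    end = min(start + per_page, total)
    return {
        "start": start,
        "end": end,
        "show_back": page > 1,        # anything before this page?
        "show_forward": end < total,  # anything after it?
    }

print(page_window(page=2, per_page=10, total=25))
```

Because the post list is already ordered and cached, rendering a page is just slicing that list between `start` and `end`.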
Here we wrestled with a few different approaches:
We went with the last one, because all we needed to do was run three simple queries to look for changes, then cache the result.
We then compare that to a cached version and, for any changes, we prime the cache for that content type as described above.
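One way to sketch that change detection (the details here are assumptions: I'm fingerprinting each "what changed" query result with a hash and comparing it against the stored one; names and the in-memory cache are illustrative):

```python
import hashlib
import json

cache = {}  # stands in for the long-term file cache

def fingerprint(rows):
    """Stable hash of a query result."""
    return hashlib.sha256(
        json.dumps(rows, sort_keys=True).encode()
    ).hexdigest()

def check_for_changes(content_type, rows):
    """Return True (and store the new fingerprint) when the result changed."""
    fp = fingerprint(rows)
    if cache.get(content_type) == fp:
        return False          # nothing changed; keep the cached output
    cache[content_type] = fp  # changed: time to re-prime this type
    return True

print(check_for_changes("posts", [{"id": 1, "modified": "2020-01-01"}]))
print(check_for_changes("posts", [{"id": 1, "modified": "2020-01-01"}]))
```

The first call reports a change (cold cache) and the second reports none, so the expensive re-priming work only runs when a query result actually differs.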
The short answer is: not that we can think of. The main hurdle might come when we have to handle very large numbers of articles. At that point we might look at migrating to a local SQLite or Redis solution for the box.
Search could be a problem, but again there are many ways around that. In those instances, we can query the database directly and cache the search result.
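That fallback can be sketched as a direct query with a result cache in front of it. Everything here is illustrative (SQLite stands in for the separate MySQL database, and a dict stands in for the file cache); the shape is what matters: only the first occurrence of a given search term hits the database.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE posts (id INTEGER PRIMARY KEY, title TEXT);
    INSERT INTO posts VALUES (1, 'Scaling WordPress'), (2, 'Twig tips');
""")

search_cache = {}  # stands in for the file cache

def search(term):
    """Query the database directly, but cache each term's results."""
    if term not in search_cache:
        search_cache[term] = conn.execute(
            "SELECT id, title FROM posts WHERE title LIKE ?",
            ("%" + term + "%",),
        ).fetchall()
    return search_cache[term]

print(search("WordPress"))  # first call hits the database
print(search("WordPress"))  # second call is served from the cache
```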
The other potential risk is WordPress failing. There are a number of ways this could happen, but all are minimal risk thanks to the long-term file cache and the separated database.
We delivered a website in a couple of weeks that's completely data driven, fast, easy to scale, and easy to manage and extend in the future. This is really valuable in creating a key asset in a contemporary sales funnel, because it means you can deliver, test and iterate on design really rapidly.
In addition, all of the code is written so that it can be extended and new types added easily. And if we want, the code can be ported to other sites with ease.
Our speed tests all score highly, and we've managed to add a few updates to the site really quickly. We're pretty stoked with version 1.0 of the Metigy website and are looking forward to rolling out more updates soon.
Greg has a passion for what AI and Deep Learning can bring to the MarTech stack and how small and medium businesses can benefit from these new technologies.
He has over 20 years' experience as an engineer and product developer, having worked for major global marketing agencies Razorfish and We Are Social.