Cloudflare on the Edge

Metadata

Highlights

  • Occasionally, however, disruptive technologies emerge: innovations that result in worse product performance, at least in the near-term. Ironically, in each of the instances studied in this book, it was disruptive technology that precipitated the leading firms’ failure. Disruptive technologies bring to a market a very different value proposition than had been available previously. Generally, disruptive technologies underperform established products in mainstream markets. But they have other features that a few fringe (and generally new) customers value. Products based on disruptive technologies are typically cheaper, simpler, smaller, and, frequently, more convenient to use. (View Highlight)
  • Cloudflare paired its content delivery network with DDoS protection; the latter was extremely attractive to websites, gave Cloudflare an in with ISPs who valued the protection to build point-of-presence servers around the world, and, critically, gave Cloudflare better-and-better data about how data flowed around the world (improving its service) even as it improved its CDN capabilities. Cloudflare’s focus on security-for-free also meant its CDN was built on general-purpose hardware from the beginning. (View Highlight)
  • To achieve the level of efficiency needed to compete with hardware appliances required us to invent a new type of platform. That platform needed to be built on commodity hardware. It needed to be architected so any server in any city that made up Cloudflare’s network could run every one of our services. It also needed the flexibility to move traffic around to serve our highest paying customers from the most performant locations while serving customers who paid us less, or even nothing at all, from wherever there was excess capacity. (View Highlight)
    • Note: Initial days of Cloudflare
  • Cloudflare is about to go through a similar transition [as programmable CPUs]. At its most basic level, Cloudflare is an HTTP cache that runs in 117 locations worldwide (and growing). The HTTP standard defines a fixed feature set for HTTP caches. Cloudflare, of course, does much more, such as providing DNS and SSL, shielding your site against attacks, load balancing across your origin servers, and so much else. But, these are all fixed functions. What if you want to load balance with a custom affinity algorithm? What if standard HTTP caching rules aren’t quite right, and you need some custom logic to boost your cache hit rate? What if you want to write custom WAF rules tailored for your application? You want to write code. We can keep adding features forever, but we’ll never cover every possible use case this way. Instead, we’re making Cloudflare’s edge network programmable. We provide servers in 117+ locations around the world — you decide how to use them. (View Highlight)
    • Note: About Cloudflare Workers (see the Workers sketch after the highlights)
  • When using Durable Objects, Cloudflare automatically determines the Cloudflare datacenter that each object will live in, and can transparently migrate objects between locations as needed. Traditional databases and stateful infrastructure usually require you to think about geographical “regions”, so that you can be sure to store data close to where it is used. Thinking about regions can often be an unnatural burden, especially for applications that are not inherently geographical. With Durable Objects, you instead design your storage model to match your application’s logical data model. For example, a document editor would have an object for each document, while a chat app would have an object for each chat. There is no problem creating millions or billions of objects, as each object has minimal overhead. (View Highlight)
    • Note: Durable Objects not only store data for Cloudflare Workers, but also maintain state (see the chat-room sketch after the highlights)
  • In Cloudflare’s example of a chat app, every individual conversation is an object, and that object is moved as close to the participants as possible; two people chatting in the U.S. would utilize a Durable Object in a U.S. data center, for example, while two in Europe would use one there. There is a bit of additional latency, but less than there might be with a centralized cloud provider. (View Highlight)
  • This last point gets at why the cloud and mobile, which are often thought of as two distinct paradigm shifts, are very much connected: the cloud meant applications and data could be accessed from anywhere; mobile made the I/O layer available anywhere. The combination of the two makes computing continuous. What is notable is that the current environment appears to be the logical endpoint of all of these changes: from batch-processing to continuous computing, from a terminal in a different room to a phone in your pocket, from a tape drive to data centers all over the globe. In this view the personal computer/on-premises server era was simply a stepping stone between two ends of a clearly defined range. (View Highlight)
  • Since we’re unlikely to make the speed of light any faster, the ability for any developer to write code and have it run across our entire network means we will always have a performance advantage over legacy, centralized computing solutions — even those that run in the “cloud.” If you have to pick an “availability zone” for where to run your application, you’re always going to be at a performance disadvantage to an application built on a platform like Workers that runs everywhere Cloudflare’s network extends. (View Highlight)
  • But let’s be real a second. Only a limited set of applications are sensitive to network latency of a few hundred milliseconds. That’s not to say under the model of a modern major serverless platform network latency doesn’t matter, it’s just that the applications that require that extra performance are niche…People who talk a lot about edge computing quickly start talking about IoT and driverless cars. Embarrassingly, when we first launched the Workers platform, I caught myself doing that all the time. (View Highlight)
  • Most computing resources that run on cloud computing platforms, including serverless platforms, are created by developers who work at companies where compliance is a foundational requirement. And, up until now, that’s meant ensuring that platforms follow government regulations like GDPR (European privacy guidelines) or have certifications proving that they follow industry regulations such as PCI DSS (required if you accept credit cards), FedRamp (US government procurement requirements), ISO27001 (security risk management), SOC 1/2/3 (Security, Confidentiality, and Availability controls), and many more. But there’s a looming new risk of regulatory requirements that legacy cloud computing solutions are ill-equipped to satisfy. Increasingly, countries are pursuing regulations that ensure that their laws apply to their citizens’ personal data. One way to ensure you’re in compliance with these laws is to store and process data of a country’s citizens entirely within the country’s borders. (View Highlight)
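
The Workers highlight above is about writing your own code at the edge when fixed features (standard HTTP caching, built-in load balancing, stock WAF rules) are not quite right. As a concrete illustration of the custom-cache-rule case, here is a minimal sketch of a Worker, assuming the standard fetch handler and Cache API available in the Workers runtime; the utm_-stripping rule is an invented example of application-specific logic, not anything Cloudflare prescribes.

```ts
// A sketch of a Worker that boosts cache hit rate with a custom cache key:
// utm_* tracking parameters vary per visitor but never change the response,
// so stripping them lets many distinct URLs share a single cached entry.
export default {
  async fetch(request: Request, env: unknown, ctx: ExecutionContext): Promise<Response> {
    if (request.method !== "GET") return fetch(request); // only cache idempotent reads

    const url = new URL(request.url);
    for (const param of [...url.searchParams.keys()]) {
      if (param.startsWith("utm_")) url.searchParams.delete(param);
    }
    const cacheKey = new Request(url.toString(), request);

    const cache = caches.default; // the edge cache in whichever datacenter ran this
    const cached = await cache.match(cacheKey);
    if (cached) return cached;

    // Cache miss: fetch from the origin, reply immediately, and store a copy
    // without blocking the response.
    const response = await fetch(cacheKey);
    ctx.waitUntil(cache.put(cacheKey, response.clone()));
    return response;
  },
};
```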
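
The Durable Objects highlights describe matching the storage model to the application's logical units: one object per document, one object per chat. A minimal sketch of that pattern follows, assuming a Durable Object namespace bound as CHAT_ROOM (a hypothetical binding name) and the documented idFromName routing; the stored message log is illustrative only.

```ts
// One Durable Object instance per chat room: Cloudflare places the object near
// its participants and routes every request for that room to the same instance,
// so the room's state lives in exactly one place.
export class ChatRoom {
  constructor(private state: DurableObjectState) {}

  async fetch(request: Request): Promise<Response> {
    const log = (await this.state.storage.get<string[]>("log")) ?? [];

    if (request.method === "POST") {
      // Append the posted message to this room's durable log.
      log.push(await request.text());
      await this.state.storage.put("log", log);
      return new Response("ok");
    }

    // Any other request just reads the room's history back.
    return new Response(JSON.stringify(log), {
      headers: { "content-type": "application/json" },
    });
  }
}

// The Worker in front derives the object from the room name in the URL
// (e.g. /rooms/standup); the same name always maps to the same object.
interface Env {
  CHAT_ROOM: DurableObjectNamespace;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const room = new URL(request.url).pathname.split("/").pop() || "lobby";
    const id = env.CHAT_ROOM.idFromName(room);
    return env.CHAT_ROOM.get(id).fetch(request);
  },
};
```

Because every request for a given room name resolves to the same object, the room's state needs no cross-region coordination, which is what lets Cloudflare migrate the object toward its participants, as the chat-app highlight notes.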
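
The final highlight raises data-localization rules that centralized clouds are poorly placed to satisfy. Durable Objects can be created with jurisdiction-restricted IDs for exactly this case; the sketch below assumes a hypothetical USER_DATA binding and a deliberately naive way of deciding which users fall under EU rules.

```ts
interface Env {
  USER_DATA: DurableObjectNamespace;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    // Assume the application already knows which users fall under EU rules;
    // a real system would look this up rather than trust a request header.
    const isEuUser = request.headers.get("x-user-region") === "eu";

    // A jurisdiction-restricted ID pins the object (and the data it stores) to
    // datacenters inside that jurisdiction; an unrestricted ID is placed
    // wherever is closest to the first request.
    const id = isEuUser
      ? env.USER_DATA.newUniqueId({ jurisdiction: "eu" })
      : env.USER_DATA.newUniqueId();

    // In practice the ID would be persisted (e.g. keyed by user) so later
    // requests reach the same object; here we simply forward this one request.
    return env.USER_DATA.get(id).fetch(request);
  },
};
```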