Our early testing showed that high-throughput apps are not a good choice for your first Docker deployment, due to the Docker network stack. Docker 1.0 brought reliable host-based networking, which may have changed this situation; we're investigating that now. Applications where each instance must stay up for a long time are not a good first choice, either.
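For reference, host networking is enabled per container; a minimal sketch, assuming the app is packaged as an image (the image name here is a placeholder):

```shell
# Run the container on the host's network stack instead of the default
# NAT'd bridge, avoiding the per-packet bridge/veth overhead discussed
# above. --net=host shipped with Docker 1.0; "my-throughput-app" is a
# hypothetical image name.
docker run -d --net=host my-throughput-app
```

The trade-off is isolation: the container now shares the host's ports and interfaces, so port collisions become your problem again.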
This has got to make companies in the business of metering and billing start to wonder how soon AWS will expand into their territory. Consider the 'specialty' companies providing EC2 usage analysis and reporting, like Cloudyn, Cloudability, Newvem, and RightScale… to name a few. Historically, AWS does NOT acquire companies with adjunct functionality… it builds the functionality out itself. (It's not always a cakewalk being an AWS partner.)
Lately, however, it seems the company has been listening to its users. Earlier this month, the company launched a new Billing Console that gives users a dashboard with colorful, very un-AWS-like graphs to keep track of their cloud computing costs across Amazon's services.
Today’s update is squarely focused on EC2. As Amazon notes, the new usage reports are meant to give you “insights into your instance usage and your usage patterns, and will provide you with information that you need to optimize your EC2 usage.”
In general, the ability and interest of (cloud) infrastructure service vendors to add metering, reporting, and billing capabilities to their own offerings is why companies like Talligent (soon to be Akasia) have shifted away from a focus on cloud service providers and toward selling their metering/billing/reporting services to enterprises.
“We think of applications today as an executable sitting on a virtual machine,” said Forrester’s Kindness. “They may talk about two- or three-tiered applications, too, but when you move to the cloud, developers change the way they develop applications. When you move to Amazon Web Services, it’s your responsibility to create an application that can move around when a server goes down. That application executable is being broken up into multiple files on multiple servers, so if one goes down, another can come up and it can keep on going. Think about the way those applications go back and forth now. It changes the way networking is done, whether on the LAN or across the WAN. You have more traffic, but it will be smaller and more dispersed.”
Nice summary from Andre Kindness.
Did not account for increased latency after moving to EC2. In the datacenter they had sub-millisecond access between machines, so it was possible to make 1,000 calls to memcache for one page load. Not so on EC2. Memcache access times increased 10x to a millisecond, which made their old approach unusable. The fix was to batch calls to memcache so that a large number of gets go out in one request.
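A back-of-the-envelope model shows why batching was the right fix. This sketch uses the figures quoted above (sub-millisecond vs ~1 ms round trips, 1,000 gets per page); the latency numbers and batch size are assumptions for illustration:

```python
# Latency model: total network time is one round trip per batch of gets.
# Round-trip times are taken from the quote; batch size is hypothetical.

DATACENTER_RTT_MS = 0.1   # sub-millisecond access in the old datacenter
EC2_RTT_MS = 1.0          # ~10x higher on EC2, per the quote
CALLS_PER_PAGE = 1000

def page_load_cost_ms(rtt_ms, calls, batch_size=1):
    """Network time spent on memcache gets for one page load."""
    batches = -(-calls // batch_size)  # ceiling division
    return batches * rtt_ms

# Unbatched: 1,000 sequential gets per page.
print(page_load_cost_ms(DATACENTER_RTT_MS, CALLS_PER_PAGE))  # ~100 ms: tolerable
print(page_load_cost_ms(EC2_RTT_MS, CALLS_PER_PAGE))         # ~1000 ms: unusable
# Batched: the same 1,000 keys in ten multi-get requests of 100 keys each.
print(page_load_cost_ms(EC2_RTT_MS, CALLS_PER_PAGE, batch_size=100))  # ~10 ms
```

The model ignores serialization and server-side cost, but the shape of the result holds: when round trips dominate, total latency scales with the number of requests, not the number of keys, which is exactly what multi-get exploits.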
This past April, MemSQL spent more than $27,000 on Amazon virtual servers. That’s $324,000 a year. But for just $120,000, the company could buy all the physical servers it needed for the job — and those servers would last for a good three years. The company will add more machines over that time, as testing needs continue to grow, but its server costs won’t come anywhere close to the fees it was paying Amazon.
Frenkiel estimates that, had the company stuck with Amazon, it would have spent about $900,000 over the next three years. But with physical servers, the cost will be closer to $200,000. “The hardware will pay for itself in about four months,” he says.
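The arithmetic checks out; a quick sketch using only the numbers quoted in the article (the three-year figures are Frenkiel's estimates, not ours):

```python
# Sanity-checking the cost figures quoted above.
aws_monthly = 27_000       # April's EC2 bill
hardware_cost = 120_000    # one-time purchase, roughly a 3-year lifespan

aws_yearly = aws_monthly * 12
print(aws_yearly)          # 324000, matching "$324,000 a year"

payback_months = hardware_cost / aws_monthly
print(round(payback_months, 1))  # 4.4, in line with "about four months"
```

Note that the payback calculation ignores power, space, and ops staff for the physical servers, which is why Frenkiel's three-year on-premises estimate ($200,000) is higher than the $120,000 sticker price.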
One of the perks of the Amazon cloud is that you can instantly expand and shrink your pool of machines, paying only for what you need at any given time. That’s a great thing if you’re building a new company — or running a website where traffic ebbs and flows. But MemSQL reached the point where its workload was relatively constant, where it was using pretty much the same number of virtual servers around the clock.
“The public cloud is phenomenal if you really need its elasticity,” Frenkiel says. “But if you don’t — if you do a consistent amount of workload — it’s far, far better to go in-house.”
Time series: analyze graph data to pick the best ML algorithm, and jpatanooga's Lumberyard project?
In the “Internet of Things” everyone always looks to the hardware, but some of this work is just as important, if not more so.
Ice has provided Netflix with insights into our AWS usage and spending. It helps us identify inefficient usage and influences our reservation purchases. It provides our entire product development organization with visibility into how many cloud resources they are using and enables each team to make engineering decisions to better manage usage. We hope that by releasing it as part of NetflixOSS, the rest of the community can realize similar, and even greater, benefits.
Softlayer = higher than *I* thought. YMMV