In my view, SDN is not a tipping point. SDN is not obsoleting anyone. SDN is a starting point for a new network. It is an opportunity to ask: if we threw all the crap in our networks in the trash and started over, what would we build? How would we architect the network, and how would it work? Is there a better way?
All we are seeing today are failures and proofs of concept of varying success on the road to a new network. Drawing conclusions at this stage of development is pointless. We should be intrigued that people are asking: is there a better way?
Could not agree more. In fact, it’s one of the big reasons I came to Juniper:
- Clean-sheet approach to the DC network problem
- Software architecture with a distributed control plane, but managed as a single entity (an SDN)
- Hardware architecture that lowers latency and provides the equidistant performance needed for SOA and for turning DCs into computers, without relying on the vagaries of Ethernet as a transport mechanism (an integrated SDN)
- No change in behavior required for compute/storage, as external interfaces are all standard
- JUNOS automation (NETCONF, etc.) for infrastructure workflow/orchestration and connectivity to OpenFlowy sorts of stuff (see the sketch below)
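For the curious, here’s roughly what that last bullet looks like in practice. A minimal sketch, assuming the open-source ncclient Python library; the hostname and credentials are placeholders, not anything Juniper ships:

```python
# Minimal NETCONF sketch using the open-source ncclient library.
# Not Juniper's code -- hostname and credentials are placeholders.
from ncclient import manager

with manager.connect(
    host="qfabric-director.example.net",  # hypothetical management address
    port=830,                             # standard NETCONF-over-SSH port
    username="automation",
    password="secret",
    hostkey_verify=False,
    device_params={"name": "junos"},      # Junos-flavored NETCONF handling
) as conn:
    # Pull the running configuration as XML -- the raw material for any
    # orchestration workflow that wants to reason about the fabric.
    reply = conn.get_config(source="running")
    print(reply.data_xml)
```

From there the same session can push changes back with `edit_config`, which is where the workflow/orchestration piece comes in.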
Will QFabric be the cat’s meow versus, and alongside, the 30 other solutions to DC networking? Time will tell. What’s important is that the market is now weighing new decisions, something it hasn’t done in a decade. Game on.
While Cisco might want everyone to think SDN is years away, reality says otherwise. OpenFlow certainly has some maturing to do, but there are SDN-powered products shipping today.
Juniper’s QFabric is a great example of a fully integrated software-defined network. Juniper split the data and control planes in its routers eons ago. Distributing that control plane and running it on x86 gear, however, is the really hard part of solving both the opex and capex challenges of a datacenter network.
Everything else in the datacenter has changed, except for the network. Solving the datacenter network problem for the next decade requires new physical and software architectures. It’s fantastic to see Stacy and you covering this transition with such gusto. Customers have more choices than ever. The explosion of networking start-ups, merchant silicon, and the rise of the network programmer means the network industry will likely look very different 10 years from now. Game on.
My comment on Gigaom’s article on Cisco & software-defined networks
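To make the control/data-plane split from that comment concrete, here’s a toy sketch (my own illustration, not Juniper’s implementation): one control process computes forwarding state for the whole fabric and pushes it down to any number of simple forwarding elements, which is the basic shape of a distributed control plane managed as a single entity.

```python
# Toy illustration of a split control/data plane -- my own sketch,
# not Juniper's implementation.

class ForwardingElement:
    """Data plane: forwards packets using a table it did not compute."""
    def __init__(self, name):
        self.name = name
        self.table = {}  # destination prefix -> egress port

    def install(self, table):
        self.table = dict(table)

    def forward(self, dest):
        return self.table.get(dest, "drop")

class Controller:
    """Control plane: a single entity computing state for the fabric."""
    def __init__(self, elements):
        self.elements = elements

    def converge(self, routes):
        # A real system would run a routing computation on x86 here;
        # this sketch just pushes one precomputed table everywhere.
        for fe in self.elements:
            fe.install(routes)

fes = [ForwardingElement(f"node-{i}") for i in range(4)]
Controller(fes).converge({"10.0.0.0/8": "port-1", "192.168.0.0/16": "port-2"})
print(fes[0].forward("10.0.0.0/8"))  # -> port-1
```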
Solid summary of the state of the network fabric world.
A good chat about QFabric with the Packet Pushers crew. Also shameless self-promotion on my part.
Includes a fairly lengthy comment from me on QFabric architecture, resiliency, and why you might want a QFabric.
Struggling with emerging datacenter network architectures
Over the last four days there’s been an interesting discussion around OpenFlow and QFabric. Granted, it was kicked off by @Bradhedlund, who is a knowledgeable guy when it comes to networks, but who also works for Cisco and is known to, shall we say, incite conversations from time to time.
[I’ll note here that when I originally posted this, I used a semi-derogatory name to describe Brad - which might have been fine had we been sitting around drinking beers, but was obnoxious on all other levels. I wasn’t thinking and it wasn’t right even though he and a bunch of others seemed to take it well. Brad - my apologies.]
This conversation highlights a common misconception about QFabric: because it is deployed as multiple components, many network folk want to treat all the bits like a network… except it’s not. It’s a switch.
In general, whenever you have a question about QFabric, start with: how would a single switch do that today?
Of course, the insightful people in the network community acknowledge the data path, but then go straight to the control plane, congestion management, and the management of an intra-datacenter connectivity solution, and begin asking lots of questions. Which, of course, is the right thing to do, as at the end of the day resiliency and stability often trump performance.
Updated with tweets from @boobasan and @ccie15672
Brent Draney has been testing the QFabric architecture inside his environment at the Lawrence Berkeley National Laboratory, where he hosts the NERSC Magellan project, a supercomputing facility backed by the Department of Energy. He compared QFabric’s performance to that of his supercomputing environment connected via InfiniBand.
“We put it head-to-head with one of our supercomputers,” Draney said during Juniper’s webcast. “Two of our [supercomputing] benchmarks were exceeded [by QFabric]. Most of [the QFabric] were running at 90% [of InfiniBand]. We’re really excited to see how far we can push that. Ethernet clusters usually get a fraction of this performance.”
Ethernet workloads at the same speed as InfiniBand? Giddy-up.