In the distributed-model datacenter, most single-purpose servers are attached to the network with a small number of simple, single-segment connections (most often one), and either use local storage or are connected into the SAN fabric as simple consumers of storage. This is not to say that there are no complex, interconnected systems today that require close cooperation between the server, storage and network teams; they are simply a minority of cases, mostly treated as exceptions rather than the rule.
Here is a very simplistic description of the three silos as they exist in most organizations today:
Server group – typically responsible for the servers: their installation, configuration up to and including the OS, maintenance and administration. Often this group is further divided by server flavour – e.g., a Windows server group and a UNIX server group.
Storage – typically responsible for the configuration of storage devices as well as their connectivity to servers through a Fibre Channel storage area network. This group often also covers backups and archiving.
Network – typically responsible for the network connectivity layer between the servers, some storage devices and the end-user devices (e.g., desktops), as well as network-level security. Most often responsible for both local and wide area network functionality.
With the adoption of virtualization and converged networking, this simple picture is becoming much more complex, and the factorization into the three realms is breaking down. Let’s look at some examples of cross-functional areas that are becoming commonplace in the datacenter, and the challenges they bring.
Hypervisor-based virtualization extends the network into the server through the use of virtual switches. This is changing the interface between the server and network teams. The server is no longer just a simple network device: it now requires multiple network interfaces and connectivity to multiple VLANs, often through VLAN trunking, which was previously confined to the network layer. Inside the server there are multiple workloads connected to different networks, and network-type devices may run as virtual appliances inside the environment. This brings a whole set of issues around security and network connectivity. Discussing the networking issues involved in implementing virtualization environments could fill a separate blog entry; all I am pointing out here is that the network and server worlds have collided, and the point of demarcation is no longer clear.
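To make the trunking point concrete, here is a minimal sketch using the libvirt Python bindings to define a virtual network, backed by an existing Open vSwitch bridge, that trunks two VLANs down to guest vNICs. The bridge name, network name and VLAN IDs are placeholders I invented for illustration, not a recommended design.

```python
import libvirt

# Hypothetical example: a libvirt network backed by an existing
# Open vSwitch bridge ("ovsbr0") that trunks two VLANs to guest vNICs.
# The bridge name, network name and VLAN IDs are illustrative only.
TRUNKED_NET_XML = """
<network>
  <name>guest-trunk</name>
  <forward mode='bridge'/>
  <bridge name='ovsbr0'/>
  <virtualport type='openvswitch'/>
  <vlan trunk='yes'>
    <tag id='10'/>  <!-- e.g. an application VLAN -->
    <tag id='20'/>  <!-- e.g. a management VLAN -->
  </vlan>
</network>
"""

conn = libvirt.open("qemu:///system")          # connect to the local hypervisor
net = conn.networkDefineXML(TRUNKED_NET_XML)   # persist the network definition
net.create()                                   # start the network
conn.close()
```

Even in this tiny example the split of responsibilities is blurry: the VLAN IDs come from the network team’s plan, but the object itself lives on the hypervisor that the server team administers.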
There are some products starting to simplify this co-existence. For example, the Nexus 1000v extends the network into the virtual environment in a way that is consistent with the networking team’s view, providing controlled connectivity. While this addresses some of the security, quality and consistency issues, it does not avoid the new symbiosis of server and network teams, as both the deployment and the ongoing management of the Nexus 1000v involve skills from both teams.
Issues of integration are also arising between the network and storage teams with the advent of converged networking. Delivering more and more of the storage over the same physical network as traditional networking traffic is increasing the need for the network and storage teams to work closely together on the design and operational aspects of the environment. Two major drivers for this are the Fibre Channel over Ethernet (FCoE) protocol and 10 Gbps networks (soon to be 40 Gbps). The issues range from network topology to quality of service and access controls. Just on the topology side, best practice has always indicated that storage traffic should be kept in its own layer-two domain (most often a VLAN); in practice that meant separate segments for backups and perhaps for block-level storage. When we throw iSCSI, FCoE, NFS, CIFS and backups all into the mix, what does that mean for network topology and complexity? This is further complicated by the fact that some of the storage is clearly server storage (iSCSI, FCoE) and some is clearly end-user storage (file shares for user directories), yet it is often delivered off the same storage devices, which can present storage through a number of different protocols.
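As a rough illustration of the planning problem, the sketch below models how those traffic classes might be mapped to separate VLANs and 802.1p (CoS) priorities on a converged fabric. Every VLAN ID and priority value here is invented for the example; FCoE in particular also needs lossless treatment (priority flow control), which a table like this only hints at.

```python
# Hypothetical traffic-separation plan for a converged 10 Gbps fabric.
# All VLAN IDs and CoS values are invented for illustration only.
TRAFFIC_PLAN = {
    "fcoe":      {"vlan": 100, "cos": 3, "lossless": True},   # block storage, needs PFC
    "iscsi":     {"vlan": 110, "cos": 4, "lossless": False},  # block storage over TCP/IP
    "nfs":       {"vlan": 120, "cos": 2, "lossless": False},  # server file storage
    "backup":    {"vlan": 130, "cos": 1, "lossless": False},  # bulk, latency-tolerant
    "user_cifs": {"vlan": 200, "cos": 0, "lossless": False},  # end-user file shares
}

# Print the plan, e.g. for review by both the storage and network teams.
for name, policy in TRAFFIC_PLAN.items():
    print(f"{name:>9}: VLAN {policy['vlan']}, CoS {policy['cos']}, "
          f"lossless={policy['lossless']}")
```

The point is not the particular numbers but that one table now spans two silos: the storage team owns the traffic classes, the network team owns the VLANs and queues, and neither can fill it in alone.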
Between the storage and server teams, storage devices are taking on the role of file servers, and issues like data tiering and de-duplication often have to be addressed consistently from both the storage and the server side. Virtualization also presents a very different workload type to storage devices, with its own set of backup requirements.
Desktop virtualization and the delivery of end-user applications from virtual environments present another merge point, this time between the desktop and server teams, with its own set of challenges around who owns what and where the right skills sit to design and operate optimal environments. The point of this article is not to present a full listing of these new interaction points between teams, but rather to draw our attention to a new paradigm in IT, where the overlaps between the server, storage and network worlds are increasing to the point that the old organizational models are starting to strain, and in some cases to impact the quality of the environment.
The situation is bound to get even more entangled as the cloud layer is introduced on top of the virtualization platforms. It will take cross-functional teams that can effectively integrate servers, storage and network to truly create the datacenters of tomorrow.
Tags: cloud computing, IT, IT environments, network, server, storage
by Milos Brkic