
At What Level Should Data Center Security Be Set?

4/10/2014

Within most organizations there is a serious tension between the IT operations team and the security team over where to set security standards. Operations needs an appropriate level of access to carry out routine administrative tasks; security resists handing out grants that could become a threat at any given point in time. It is hard to draw a clear line that marks out a safe zone of applications which can be freely accessed, because in one way or another all organizational data is sensitive and confidential. Is there a way to resolve this conflict?

Data centers today have several options, each with advantages and disadvantages of its own. Here is a summary of what each option provides, and where it falls short:

Group Logon IDs:

Most conventional systems have this set-up in place: a single logon ID is shared by an operations team, with grants limited to certain levels. The activities performed under this ID can be viewed, monitored and controlled easily. However, the restricted access limits how much the members can do within the system. This works well for sensitive data handling, but falls short where deeper work on the operating system is required.

Firecall IDs:

These IDs are similar to group logon IDs, but with more granularity. It is common practice to grant a firecall ID to team members who need an exceptional level of system access, and there is more flexibility associated with these IDs. While routine logon can still be performed under a single identification, two further firecall IDs can exist to perform two different tasks. This offers security at several different levels of granularity, as the sketch below illustrates.
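
To make the distinction concrete, here is a minimal, purely illustrative sketch in Python. The ID names, grants and actions are invented for the example and do not come from any particular product.

    # Toy model: one shared group logon ID versus task-scoped firecall IDs.
    # All names and permission levels here are hypothetical.

    GROUP_IDS = {
        # A single shared operations ID with a fixed, limited set of grants.
        "OPSGRP01": {"read", "submit_batch"},
    }

    FIRECALL_IDS = {
        # Firecall IDs split the same system up by task, so two different
        # emergency activities can run under two different identifications.
        "FIRE_DBFIX": {"read", "update_db"},
        "FIRE_OSPATCH": {"read", "apply_patch"},
    }

    def can_perform(logon_id: str, action: str) -> bool:
        """Return True if the shared or firecall ID carries the needed grant."""
        grants = GROUP_IDS.get(logon_id) or FIRECALL_IDS.get(logon_id) or set()
        return action in grants

    if __name__ == "__main__":
        print(can_perform("OPSGRP01", "update_db"))    # False: the group ID is limited
        print(can_perform("FIRE_DBFIX", "update_db"))  # True: task-scoped firecall grant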

Defects Of Group Logon and Firecall IDs:

Both set-ups share similar defects. While auditing is easy and can be monitored continually, it is impossible to identify which individual accessed the system under a particular ID when the situation demands it. Personal accountability is lost, and root cause analysis becomes tougher.

Another major disadvantage of both kinds of ID appears in an emergency. Imagine the operations team all ready to present a business report, and the group logon ID fails to work. When the single ID that grants access to the system is changed or forgotten all of a sudden, it can create havoc during run-time operations, and the only resort is to wait for the administrator to reset the login.

Firecall IDs add a further requirement for approvals. If the company's norms demand too much documentation for these, the normal process gets delayed, which can be utterly frustrating.

Alternate IDs:

When monitoring grants are handed out in the form of normal IDs, but the security team needs to restrict any changes or updates to the existing system, the concept of alternate IDs arises. Authoritative control is given to the alternate IDs, which are used only when necessary. This is an easier and more flexible way to control changes to a system that is considered stable.

Alternate IDs provide more flexibility than firecall or group IDs. They also grant faster access to update authority; however, they are harder to control.

With the efficiency of alternate IDs understood, and the flexibility of normal IDs used to the best possible extent, data centers can adopt a few processes to ensure seamless working. This bridges the gap between the security and operations teams, by granting the needed access to the right individuals at all times while also keeping sensitive data in check.

The Recommended Processes:

  • Most application and system programmers only need to monitor the run-time environment. 'Read' access is all they require for basic production tasks; it poses no security threat and still allows continuous monitoring.
  • The software used to control firecall and group logon IDs must have a mechanism that alerts the administrator as soon as a logon fails. This keeps the admin aware of the situation and lets a new identification be created and used without delay (a minimal sketch of such a hook follows this list).
  • There must be one highest level of authority that is provided with complete system access. By 'complete', this means every level of access to the system, so that this authority can take control when the environment goes 'bad', which is very likely when larger organizations work on overloaded servers too often.
  • The technical support team must be contacted as soon as the situation seems to slip out of control. At the very least, this team can advise on how to preserve the data that has not yet been lost.
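
As a purely illustrative sketch of the second point, the fragment below shows a logon monitor that raises an alert the moment a shared-ID logon fails. The notification transport is only a logging placeholder; in practice it would page the on-call administrator or open an incident.

    # Hypothetical sketch: alert the administrator when a shared group or
    # firecall ID fails to log on, so a replacement ID can be issued quickly.
    import logging

    logging.basicConfig(level=logging.INFO)
    log = logging.getLogger("logon-monitor")

    def notify_admin(message: str) -> None:
        # Placeholder transport: a real system would page the on-call admin
        # or open a ticket instead of just logging a warning.
        log.warning("ADMIN ALERT: %s", message)

    def attempt_logon(logon_id: str, password_ok: bool) -> bool:
        """Simulate a logon attempt with a shared group or firecall ID."""
        if password_ok:
            log.info("Logon succeeded for %s", logon_id)
            return True
        notify_admin(f"Logon failed for shared ID {logon_id}; "
                     "create a replacement identification without delay.")
        return False

    if __name__ == "__main__":
        attempt_logon("OPSGRP01", password_ok=False)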


Back To Basics Of VDI

3/10/2014

VDI is the talk of today's technology market, wherever you go. But do you know what VDI is, and what makes it the choice of the current computing industry? Let us find out.

Transition from Terminal Server To VDI Architecture:

Earlier, Microsoft's Remote Desktop Protocol connected a client computer to a host server, enabling the user to operate the host remotely. This was the Remote Desktop Services system, where users connected directly to sessions on the host. The host was called the 'Terminal Server' (TS) or the 'Remote Desktop Server' (RDS).

This system suffered from some serious privacy issues and from limited multi-user compatibility. This gave rise to the virtualization approach, which divides the host server into multiple virtual machines (VMs), each with an individual virtual desktop of its own, so that multiple users connect to their own individual VMs. This architecture is referred to as 'Virtual Desktop Infrastructure' (VDI). Desktop virtualization can be paired with two types of endpoint device:

Thin Client:

Designed for the VDI architecture, these are endpoint devices that typically run a specific operating system. The most common thin clients today run Linux or Windows Embedded, and a rare few run Windows CE. They usually support multiple VDI connection brokers (the component responsible for establishing the remote connection), are managed with a central utility, and are very flexible.

Zero Client:

Instead of an operating system, these have an onboard processor, and most of the decoding and processing takes place on dedicated hardware. This eliminates the need for a standard CPU or GPU set-up. These systems boot in a few seconds, which enhances the overall user experience with minimal delays. Zero clients are designed for one specific protocol, most commonly PCoIP, HDX or RemoteFX, with the PCoIP-based units offering exceptional video graphics. They support at most one or two connection types, for example Citrix or VMware.

What makes up this VDI??

The VDI architecture is best understood as the following necessary components working together (a small sketch follows the list):

  • A Physical Server - The host that houses the complete data environment.
  • Hypervisor - The virtual machine manager; it creates the multiple virtual machines on the server and hosts them.
  • Connection Broker - The desktop agent that uses certain protocols to create and maintain the remote connection with the client machine.
  • Endpoint Device - The client machine or PC through which the user accesses the host.
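
As a rough, purely illustrative sketch, the Python fragment below wires these four building blocks together. The class and field names are invented for the example; real products expose very different interfaces.

    # Hypothetical model of the four VDI building blocks listed above.
    from dataclasses import dataclass, field

    @dataclass
    class VirtualMachine:
        name: str

    @dataclass
    class Hypervisor:
        # The virtual machine manager: creates and hosts VMs on the server.
        vms: list = field(default_factory=list)

        def create_vm(self, name: str) -> VirtualMachine:
            vm = VirtualMachine(name)
            self.vms.append(vm)
            return vm

    @dataclass
    class PhysicalServer:
        # The physical host that houses the complete environment.
        hypervisor: Hypervisor

    @dataclass
    class ConnectionBroker:
        # Pairs an endpoint device with a virtual desktop over a remoting
        # protocol such as PCoIP, ICA/HDX or RDP/RemoteFX.
        protocol: str

        def connect(self, endpoint: str, vm: VirtualMachine) -> str:
            return f"{endpoint} -> {vm.name} over {self.protocol}"

    if __name__ == "__main__":
        server = PhysicalServer(Hypervisor())
        desktop = server.hypervisor.create_vm("user1-desktop")
        broker = ConnectionBroker(protocol="PCoIP")
        print(broker.connect("zero-client-42", desktop))  # the endpoint device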

The Major Players in the world of VDI:

There are only a limited few names in this area as providers of VDI architecture. These are the companies with the prevalent hypervisor platforms used to create the virtual machines and install the virtual desktops:

  • XenServer from Citrix
  • vSphere from VMware
  • Hyper-V from Microsoft
  • KVM from Red Hat
  • VirtualBox from Oracle
  • Parallels hypervisor from Parallels

Then comes the connection broker, which establishes the connection between the hypervisor's virtual machines and the client's operating system. For connection brokering services, View from VMware is a preferred name.

The protocols that regulate the working of these services come from the following major companies:

  • Microsoft: Remote Desktop Protocol (RDP) + RemoteFX
  • Citrix: Independent Computing Architecture (ICA) + HDX
  • VMware / Teradici: PCoIP
  • Red Hat: Simple Protocol for Independent Computing Environments (SPICE)
  • HP: Remote Graphic Software (RGS)
  • Ericom: Blaze
  • Dell Quest: Experience Optimized Protocol (EOP)

Which One Is Preferred – Thin Client or Zero Client??

Zero clients are the recommended choice for several reasons, which make them preferable to thin clients.

In Comparison To The Thin Client Systems:

  • Lower Deployment Costs:
Most of the endpoint hardware required by thin clients, including the processor, RAM and storage, is eliminated, which brings deployment costs down drastically.

  • Lesser Licensing And Maintenance:
There is no operating system at the client end, which reduces licensing and maintenance costs. It also relieves the client of updating software patches and drivers from time to time.

  • Negligible Power Consumption:
Against the usual 15-30 watts consumed by thin client systems, zero clients consume under 5 watts when fully active, and under 0.5 watts in sleep mode.

Which Zero Client To Look For??

Out of the many big names in the market today, check for the one that offers the following features:

  1. Open-source hypervisor
  2. Free maintenance
  3. Everything in a single package
  4. Eco-friendly
  5. Supports the industry applications you need
  6. Supports an unlimited number of users
  7. Is affordable


Best Cloud System For Your Company

3/10/2014

When technology offers too many choices, it is highly confusing to decide which one to adopt. Companies cannot afford to ignore them all, since these are the stepping stones to the coming era of computing. Nor can they afford to pick one at random, since the offered services vary greatly and are specialized in their own respects. So how does a business know which is the best fit for its organization in this ever-changing computing landscape?

When replacing legacy systems, the requirement is a robust and flexible infrastructure, including as-a-Service business tools. The current options are converged or engineered systems, which can be altered and tailored to organizational demands; cloud-like scaling models, which let organizations join a much wider network offering scalability; or, for companies looking for affordability with their existing applications, public cloud services. With these three choices available, it is worth looking into the strengths and weaknesses of each:

  • Converged Or Engineered Systems:
The main focus of such systems is making management easier. The existing computing systems are replaced by fewer servers, converged for seamless administration, which saves a lot of manpower: IT units find it manageable to monitor a couple of servers efficiently, and infrastructure costs for hardware purchases shrink significantly. But there is a high chance of heterogeneity in such systems. Since the engineering is implemented on existing heterogeneous systems from different vendors, it is difficult to service a converged system when the need arises, and very often the engineering tailored to the business operations turns out to be incompatible with later technology upgrades.

  • Private Cloud System:
The private cloud works best with a converged environment. In practice, however, it is implemented on high-volume servers built in racks and rows, because incremental resource addition is easier on these servers than on converged systems, and the separate storage and network offer the flexibility for easy additions. The major drawback of this system is its dependency on physical systems: if a storage system fails, the data it holds for the virtual environment is lost, so it must be backed up to some other physical storage medium.

  • Public Cloud System:
If cloud services are used well for Infrastructure, Platform or Software as a Service, there are close to no infrastructure issues left to deal with. The biggest question, however, is choosing the right kind of service. The IT team is responsible for checking whether the cloud provider misses any major service, since any shortage of resources directly impacts business users' experience and the performance of operational data output.

What Do The Organizations Do?

In reality, organizations run a mix of all the above platforms for their operations, and that introduces more issues. While there are software solutions that aim to resolve platform-specific problems from end to end, it is difficult to apply such a resolution to a hybrid mix of platforms. From an end-user perspective, response time is what matters; it is the vendor's business to offer an uninterrupted connection at the highest possible speed, and the business users do not care how. It is therefore recommended to have a single platform that can be monitored thoroughly, which in turn makes management easier.

What Should Management Software Do?

Management software works as an agent that takes care of the implemented platform service from end to end. This includes monitoring the established connections, and reporting any under-speed connection, resource utilization, network congestion, and problems if any. In short, the software can typically be characterized by the following features:

  1. There must be several channels to resolve an issue. For example, for a congested network, the management software should be able to divert the data traffic to a less congested network; alternatively, some of the load may be transferred to another virtual machine to improve the response time, or the software may shift storage to gain more processing speed. Thus, for a single problem, there should be multiple solutions (see the sketch after this list).
  2. Existing system vendors are currently trying to have their products address the management issue. In parallel, data center infrastructure vendors are adding monitoring and management facilities to their offerings. The business must decide which one suits it best.
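
As a purely illustrative sketch of point 1, the fragment below shows a management agent that keeps several remediation options for one problem class and works through them in order. The function and problem names are invented; they do not reflect any vendor's actual API.

    # Hypothetical agent: several channels to resolve a single issue.
    def divert_traffic(problem):
        print(f"Routing {problem['workload']} over a less congested network path")

    def move_to_other_vm(problem):
        print(f"Shifting part of {problem['workload']} to another virtual machine")

    def relocate_storage(problem):
        print(f"Moving storage for {problem['workload']} to faster media")

    # For one problem class, the agent keeps multiple remediation options.
    REMEDIATIONS = {
        "network_congestion": [divert_traffic, move_to_other_vm, relocate_storage],
    }

    def handle(problem):
        for action in REMEDIATIONS.get(problem["kind"], []):
            action(problem)
            break  # a real agent would stop only once monitoring confirms the fix

    if __name__ == "__main__":
        handle({"kind": "network_congestion", "workload": "the reporting batch"})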

The recommended vendors in the current market include major players such as CA Technologies, IBM and BMC Software.

