Blog Ocean

At What Level Should Data Center Security Be Set?

4/10/2014

Setting security standards is a serious point of friction between the IT operations team and the security team within an organization. Operations needs an appropriate level of access to carry out routine administrative tasks, while security resists granting privileges that could become a threat at any point in time. It is hard to draw a line that separates a safe zone of freely accessible applications from the rest, because in some sense all organizational data is sensitive and confidential. Is there a way to resolve this conflict?

Data centers today have several choices, each with advantages and disadvantages of its own. Here is a summary of what each option provides, and where it falls short:

Group Logon IDs:

Most conventional systems have this set-up in place. A single logon ID is shared by an operational team, with grants limited to certain levels. The activities performed under this ID can be viewed, monitored and controlled easily. However, the restricted access limits how far team members can work with the system. This works well for handling sensitive data, but fails where deeper access to the operating system is required.

Firecall IDs:

These IDs are similar to group logon IDs, but with more granularity. It is common practice to grant a firecall ID to team members who need an exceptional level of system access, and these IDs offer more flexibility as well. While logon is still performed under a single identification, separate firecall IDs can exist for separate tasks, which provides different levels of security at different levels of granularity.

Defects Of Group Logon And Firecall IDs:

Both set-ups share similar defects. While auditing is easier and activity can be continually monitored, it is impossible to identify who actually accessed the system with a shared ID when the situation demands it. Personal accountability is lost, and root cause analysis becomes tougher.

Another major disadvantage of both approaches shows up in an emergency. Imagine the operational team ready to present a business report when the group logon ID fails to work. A single ID that grants access to the system can create havoc during run-time operations if it is suddenly changed or forgotten, and the only resort is to wait for the administrator to reset the login.

Firecall IDs add a further requirement: approvals. If the company's norms demand heavy documentation for each approval, routine work gets delayed and the process can be utterly frustrating.

Alternate IDs:

When monitoring grants are provided through normal IDs but the security team needs to restrict changes or updates to the existing system, the concept of alternate IDs comes in. Update authority is attached to the alternate ID, which is used only when necessary. This is an easier and more flexible way to control changes to a system that is considered stable.

Alternate IDs provide more flexibility than firecall IDs or group IDs, and they grant faster access to update authority; however, they are harder to control.

With the efficiency of alternate IDs understood, and the flexibility of normal IDs used to the fullest, data centers can adopt a few processes to ensure seamless working. This bridges the gap between the security and operations teams by granting the right individuals the access they need at all times, while keeping sensitive data under check.

The Recommended Processes:

  • Most application and system programmers only need to monitor the run-time environment. 'Read' access is all that basic production tasks demand of them; it poses no security threat and still allows continuous monitoring.
  • The software used to control firecall and group logon IDs must have a mechanism that notifies the administrator as soon as a login fails. This keeps the admin aware of the situation and lets a replacement identification be created without delay (a minimal sketch of such a checkout-and-notify flow follows this list).
  • There must be a highest level of authority that is provided with complete system access. By 'complete', this means every level of access needed to take control when the environment goes 'bad', which is likely when larger organizations routinely run on overloaded servers.
  • The technical support team must be contacted as soon as the situation seems to be slipping out of control. At the very least, this team can advise on how to preserve the data that has not yet been lost.
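
Shared group and firecall IDs become auditable when every checkout and login attempt is tied to a named person and failures alert the administrator at once. The following Python sketch of that flow is purely illustrative: the log file name, e-mail addresses and SMTP relay are assumptions, not features of any particular product.

```python
import getpass
import logging
import smtplib
from datetime import datetime, timedelta
from email.message import EmailMessage

# Hypothetical audit log for firecall check-outs; the file name is illustrative.
logging.basicConfig(filename="firecall_audit.log", level=logging.INFO)
audit = logging.getLogger("firecall.audit")

ADMIN_EMAIL = "dc-admin@example.com"   # assumed notification address
SMTP_HOST = "localhost"                # assumed local mail relay

def notify_admin(subject: str, body: str) -> None:
    """E-mail the administrator, e.g. the moment a shared-ID login fails."""
    msg = EmailMessage()
    msg["From"] = "firecall-bot@example.com"
    msg["To"] = ADMIN_EMAIL
    msg["Subject"] = subject
    msg.set_content(body)
    with smtplib.SMTP(SMTP_HOST) as smtp:
        smtp.send_message(msg)

def check_out_firecall(firecall_id: str, ticket: str, hours: int = 2) -> None:
    """Record which named person holds a shared firecall ID, and until when."""
    person = getpass.getuser()
    expires = datetime.utcnow() + timedelta(hours=hours)
    audit.info("%s checked out %s (ticket %s), expires %s",
               person, firecall_id, ticket, expires.isoformat())

def record_login_result(firecall_id: str, success: bool) -> None:
    """Log every login attempt and alert the admin immediately on failure."""
    person = getpass.getuser()
    if success:
        audit.info("%s logged in with %s", person, firecall_id)
    else:
        audit.warning("LOGIN FAILED for %s (used by %s)", firecall_id, person)
        notify_admin(f"Firecall login failed: {firecall_id}",
                     f"{person} could not log in with {firecall_id}; "
                     "a replacement ID may need to be issued.")

# Example: check_out_firecall("FIRECALL01", ticket="INC-1234")
#          record_login_result("FIRECALL01", success=False)
```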


Back To Basics Of VDI

3/10/2014

VDI is the talk of the technology market wherever you go. But do you know what VDI is and what makes it the choice of the current computing industry? Let us find out.

Transition From Terminal Server To VDI Architecture:

Earlier, Microsoft's Remote Desktop Protocol (RDP) connected the client computer to a host server, enabling the user to operate the host remotely. This was the Remote Desktop Services model, where every connected user shared sessions on the same host, which was called the 'Terminal Server' (TS) or 'Remote Desktop Server' (RDS).

This system suffered from serious privacy issues and multi-user compatibility problems. That gave rise to desktop virtualization, which divides the host server into multiple virtual machines (VMs), each with an individual virtual desktop of its own; users connect to their own VM rather than to a shared session. This architecture is referred to as 'Virtual Desktop Infrastructure' (VDI). Desktop virtualization can be paired with two types of endpoint devices:

Thin Client:

Designed for the VDI architecture, these are endpoint devices that run a specific operating system. The most common thin clients today run Linux or Windows Embedded, and a rare few run Windows CE. They usually support multiple VDI connection brokers (the components responsible for establishing the remote connection), are managed with a central utility, and are very flexible.

Zero Client:

Instead of a general-purpose operating system, these have an onboard processor where most of the decoding and processing takes place on dedicated hardware, eliminating the need for a standard CPU and GPU set-up. They boot in a few seconds, which enhances the overall user experience with minimal delays. Zero clients are built for a specific protocol, most commonly PCoIP, HDX or RemoteFX, and the ones using PCoIP deliver exceptional video graphics. They support only one or two connection types at most, for example Citrix or VMware.

What Makes Up VDI?

The VDI architecture is best understood as the collaboration of the following necessary components (a minimal sketch follows the list):

  • A Physical Server - the host that houses the entire virtual desktop environment.
  • Hypervisor - the virtual machine manager; it creates the multiple virtual machines on the server and hosts them.
  • Connection Broker - the desktop agent that uses a remoting protocol to create and maintain the remote connection with the client machine.
  • Endpoint Device - the client machine or PC through which the user accesses the host.
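
To make the division of labour concrete, here is a toy Python model of a connection broker handing out virtual desktops. The class names and pool are invented for illustration; real brokers such as VMware View add load balancing, protocol negotiation and session management.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class VirtualMachine:
    """A desktop VM created by the hypervisor on the physical server."""
    name: str
    assigned_to: Optional[str] = None

@dataclass
class ConnectionBroker:
    """Toy broker: hands each user a free VM and remembers the mapping."""
    pool: list = field(default_factory=list)

    def connect(self, user: str) -> VirtualMachine:
        # Reconnect the user to an existing desktop if one is already assigned.
        for vm in self.pool:
            if vm.assigned_to == user:
                return vm
        # Otherwise hand out the first unassigned desktop in the pool.
        for vm in self.pool:
            if vm.assigned_to is None:
                vm.assigned_to = user
                return vm
        raise RuntimeError("No free virtual desktops in the pool")

# A pool of three desktops provisioned by the hypervisor on the physical server.
broker = ConnectionBroker([VirtualMachine(f"vdi-desktop-{i}") for i in range(3)])
print(broker.connect("alice").name)   # vdi-desktop-0
print(broker.connect("bob").name)     # vdi-desktop-1
print(broker.connect("alice").name)   # vdi-desktop-0 (existing session reused)
```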

The Major Players In The World Of VDI:

There are only a few names in this area providing the VDI architecture. These are the companies behind the prevalent hypervisor platforms that create the virtual machines and host the virtual desktops:

  • XenServer from Citrix
  • vSphere from VMware
  • Hyper-V from Microsoft
  • KVM from Red Hat
  • VirtualBox from Oracle
  • Parallel hypervisor from Parallels

Then comes the connection broker, which establishes the connection between the hypervisor's virtual machines and the client device. For connection brokering, View from VMware is a preferred name.

Remoting protocols govern how these services work, and the following major companies are behind them:

  • Microsoft: Remote Desktop Protocol (RDP) + RemoteFX
  • Citrix: Independent Computing Architecture (ICA) + HDX
  • VMware/ Teradici: PCoIP
  • Red Hat: Simple Protocol for Independent Computing Environments (SPICE)
  • HP: Remote Graphic Software (RGS)
  • Ericom Blaze: Blaze
  • Dell Quest: Experience Optimized Protocol (EOP)

Which One Is Preferred: Thin Client Or Zero Client?

Zero clients are often the recommended choice, and for several reasons they are preferred over thin clients.

In Comparison To The Thin Client Systems:

  • Lower Deployment Costs:
Most of the endpoint hardware required in thin clients, including processor, RAM and storage, is eliminated, which brings deployment costs down drastically.

  • Lower Licensing And Maintenance:
There is no operating system at the client end, which reduces licensing and maintenance costs and relieves the client of applying software patches and driver updates.

  • Negligible Power Consumption:
Against the 15-30 watts typically consumed by a thin client system, zero clients consume under 5 watts when fully active and under 0.5 watts in sleep mode (a rough savings calculation follows this list).
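
As a rough, back-of-the-envelope illustration of what those wattages mean per seat over a year (the duty cycle below is assumed, and the thin client is assumed to stay powered at its active wattage all day):

```python
# Rough per-seat energy comparison using the wattages quoted above.
# Assumed duty cycle: 8 active hours, 16 idle/sleep hours, 250 working days.
THIN_ACTIVE_W = 22.5                       # midpoint of the 15-30 W range, left powered on
ZERO_ACTIVE_W, ZERO_SLEEP_W = 5.0, 0.5
HOURS_ACTIVE, HOURS_SLEEP, DAYS = 8, 16, 250

thin_kwh = THIN_ACTIVE_W * (HOURS_ACTIVE + HOURS_SLEEP) * DAYS / 1000
zero_kwh = (ZERO_ACTIVE_W * HOURS_ACTIVE + ZERO_SLEEP_W * HOURS_SLEEP) * DAYS / 1000

print(f"thin client: about {thin_kwh:.0f} kWh per year")   # about 135 kWh
print(f"zero client: about {zero_kwh:.0f} kWh per year")   # about 12 kWh
```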

Which Zero Client To Look For?

Of the many big names in the market today, look for one that offers the following:

  1. An open-source hypervisor
  2. Free maintenance
  3. Everything in a single package
  4. Eco-friendliness
  5. Support for the industry applications you need
  6. Support for an unlimited number of users
  7. Affordability


Best Cloud System For Your Company

3/10/2014

When technology offers too many choices, deciding which one to adopt is highly confusing. Companies cannot afford to ignore them all, since these are the stepping stones to the next computing era; nor can they afford to pick one at random, since the services on offer are varied and specialized in their own respects. So how does a business know which is the best fit for its organization in this ever-changing computing landscape?

When replacing legacy systems, the requirement is robust, flexible infrastructure, including as-a-service business tools. The current options are converged or engineered systems that can be tailored to organizational demands, private cloud-style scaling models that let an organization become part of a much wider, scalable network, or, for companies looking for affordability with their existing applications, public cloud services. With these three choices available, it is worth looking into the strengths and weaknesses of each:

  • Converged Or Engineered Systems:
The main focus of such systems is on making management easier. Existing computing systems are replaced with fewer servers, converged for seamless administration. This reduces a lot of manpower, since IT units can monitor a handful of servers more efficiently, and infrastructure costs for hardware purchases shrink significantly. But there is a high chance of heterogeneity in such systems: because the engineering is layered on existing heterogeneous systems from different vendors, a converged system is difficult to service when the need arises, and engineering tailored to the business's operations often becomes incompatible with later technology upgrades.

  • Private Cloud System:
A private cloud works well with a converged environment, but in practice it is implemented on high-volume servers built out in racks and rows, because adding resources incrementally to these servers is easier than in converged systems; separate storage and networking offer the flexibility for easy additions. The major drawback of this model is its dependency on physical systems. For example, if a storage system fails, the data on it belonging to the virtual environment is lost, so that data must also be backed up on some other physical storage medium.

  • Public Cloud System:
If cloud services are used well, whether as Infrastructure, Platform or Software as a Service, there are close to no infrastructure issues left to deal with. The biggest question is choosing the right kind of service: the IT team is responsible for checking whether the cloud provider falls short on any major service, since any shortage of resources directly impacts business users' experience and operational performance.

What Do The Organizations Do?

In reality, organizations run a mix of all the above platforms, which introduces more issues. While there are software solutions aiming to resolve platform-specific problems end-to-end, it is difficult to apply them to a hybrid mix of platforms. From the end-user's perspective, response time is what matters; it is the vendor's business to offer an uninterrupted connection at the highest possible speed, and the business users do not care how. It is therefore preferable to have a single platform that can be monitored thoroughly, which in turn makes management easier.

What Should Management Software Do?

Management software works as an agent that looks after the implemented platform service from end to end. This includes monitoring established connections, reporting any under-speed connection, and tracking resource utilization, network congestion and any other problems. In short, such software can typically be characterized by the following:

  1. There must be several channels for resolving an issue. For a congested network, for example, the management software should be able to divert data traffic to a less congested path, transfer some of the load to another virtual machine to improve response time, or shift storage to speed up processing. For a single problem, there should be multiple remedies (a minimal sketch of such a remediation loop follows this list).
  2. Existing system vendors are currently trying to make their products address the management problem, while data center infrastructure vendors are adding monitoring and management facilities of their own. The business must decide which approach suits it best.
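
As a purely hypothetical sketch of the 'multiple remedies for one problem' idea in point 1, with invented thresholds and handler names (real suites from CA Technologies, IBM or BMC expose far richer policy engines):

```python
# Hypothetical metrics snapshot; a real agent would get these from collectors.
metrics = {"network_utilisation": 0.93, "vm_cpu": 0.55, "storage_latency_ms": 4.0}

def divert_traffic() -> str:
    return "re-routed traffic to a less congested network path"

def migrate_load() -> str:
    return "moved part of the workload to another virtual machine"

def relocate_storage() -> str:
    return "shifted hot data to faster storage"

# Several remedies for the same symptom, tried in order of preference.
REMEDIES = [
    (divert_traffic,   lambda m: m["network_utilisation"] > 0.90),
    (migrate_load,     lambda m: m["vm_cpu"] < 0.70),            # spare VM capacity
    (relocate_storage, lambda m: m["storage_latency_ms"] > 2.0),
]

def remediate(snapshot: dict) -> None:
    for action, applicable in REMEDIES:
        if applicable(snapshot):
            print("management agent:", action())
            return
    print("management agent: no automatic remedy applicable, raising an alert")

remediate(metrics)   # -> re-routed traffic to a less congested network path
```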

The recommended vendors in the current market include major players like CA Technologies, IBM and BMC Software.


An Ideal Enterprise Application Platform

14/3/2014

Business-critical applications come first in any enterprise. While the overall development environment uses hundreds of methods and platforms, companies still direct the bulk of their expenditure towards making sure the business applications run uninterrupted. This calls for a very strong operational platform.

When choosing an ideal Enterprise Application Platform (EAP), there are a few important factors to consider:

Class Loading System:

A modular class loading system boosts overall performance. Depending on usage, only the classes actually required are loaded, for efficient use of system resources; the APIs in use stay exposed while the others remain hidden and secured. Recent EAPs offer fully modular class loading systems.
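
The effect is analogous to lazy, on-demand loading in any runtime. As a rough Python illustration (Java-based EAPs achieve this with modular class loaders, not with code like this), a module can stay unloaded until it is first used, via the standard library's LazyLoader:

```python
import importlib.util
import sys

def lazy_import(name: str):
    """Return a module object whose body only executes on first attribute access."""
    spec = importlib.util.find_spec(name)
    loader = importlib.util.LazyLoader(spec.loader)
    spec.loader = loader
    module = importlib.util.module_from_spec(spec)
    sys.modules[name] = module
    loader.exec_module(module)
    return module

json = lazy_import("json")      # nothing substantial loaded yet
print(json.dumps({"ok": 1}))    # the module is actually loaded here, on first use
```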

Easy Administration:

A well-designed enterprise application platform can be fully managed through a web interface. Web-based management consoles and APIs make administrative access easy and reduce delay. Automation also eases administration through batch scripts, which can be scheduled for business application processing.

Domain Management:

Resources are best utilized when shared, and the same applies to domains: some platforms allow configuration to propagate from a central server to the other physical servers through managed domains that group various servers across the organization. Such a system is contained, easily managed and interrelated.

Server Restart:

In both development and production environments, the time spent restarting servers can be frustrating. A good platform treats restart time as an important factor, and it becomes even more crucial when server resources frequently hit their maximum and a reboot is needed to restore full operation.

Multiple Language Support:

There are practically no geographical boundaries today when businesses intend to expand globally, which means more and more business users logging into the applications. An ideal Enterprise Application Platform accounts for this by supporting as many languages as possible. At the very least, the basics of application usage must be available as web-based documentation in multiple international languages.

To achieve all of this, companies increasingly depend on the relatively new concept of an Application Server Software Platform (ASSP).

What Is Application Server Software Platform (ASSP)?

It is middleware that serves application logic and offers specialized services that allow an application to be deployed and managed effectively. Managing user connectivity, database servers and run-time applications within an environment is the job of an application server. In a distributed system this plays a big role for developers, because an application server gives them access to applications residing anywhere in the network.

For critical run-time activities like transaction processing, application servers handle scalability and availability better, and they reduce the need to code every such activity at the developer's end. Optimized, efficient services residing in the server do the job robustly, with easy administration at hand.

Application Server Software Platforms Today:

Most application platforms today are distributed, for several reasons: they are cost-effective, easy to manage, robust and tested against a variety of systems. This encourages companies to use resources effectively as the technology moves forward with this middleware as an integral part. The need to synchronize, control and coordinate better is the key driver behind deploying an application server software platform. Here are the current trends pushing businesses towards this technology:
  • Application server middleware usually comes pre-loaded with the core services a company looks for. Including this platform offers massive scalability that has been tested in many similar environments, and is therefore reliable.
  • In the last three years the ASSP market has grown dramatically as sales numbers continue to rise, and it is expected to grow further as businesses eye internationalization and expansion.
  • With cloud applications set to drive the near future, this is a highly economical way for companies to be part of a wide network. Sharing the cost of each hosted application benefits every company, especially mid-sized and smaller ones.
  • Many cloud applications are very new and not widely tested. With less support and limited insight available, it is better for organizations to stay current as part of the network, which an ASSP makes easy.
  • Social business opportunities are another major reason to deploy application servers, which can contribute significantly to growing the business.


Avoid The Most Common Misrepresentations In Data Visualization

14/3/2014

Around 20 years ago, charts were the prime vehicle for business ideas. Meetings would be held with important delegates, each with their ideas represented in words or pictures. They would present slides, share their thoughts, discuss the subject, and a decision would be taken in favor of the company's growth.

Twenty years on, we are surrounded by high-technology machines, even in the meeting rooms. Attendees carry their ideas on the latest gadgets, present them on everyone's screen with a single click, discussions are almost non-verbal with fingers tapping away, meetings are held across the globe through connected intelligent systems, and decisions are made.

A lot has changed, but one thing remains fundamentally the same: the charts. Earlier they were shared on paper and slides; now they are drawn with different tools on laptops, smaller gadgets and smartphones. Yet nothing has replaced the chart when it comes to conveying ideas, especially numbers, comparisons and checklists.

So charts retain their effectiveness for business. Decisions are taken based on the facts shown in charts that are tool-designed, sometimes even automated. And many times those business decisions prove to be terribly wrong. When they do, does the chart's representation deserve some of the blame?

Actually, it often does. Here are the most common misrepresentations that lead to fatal decisions by obscuring where the business really stands.

1.       Incorrect Choice Of Colors:

When design tools are used to create a chart, there is a choice of at least a million colors on screen. The designer usually tends to pick the most attractive ones for visual impact, but colors are critical for distinguishing between parameters. Rather than the most attractive palette, the focus should be on enhancing distinction and bringing clarity to the subject, and color-blind users should be considered too. This takes one back to the era when the selection of colors was mild, distinct and limited.

2.       Wrong Usage Of Pie Charts:

A pie chart is a very efficient way to bring out a clear comparison between factors. Now imagine a pie chart where each slice is reduced to a thin line and the chart as a whole consists of many thin lines one after another. There is nothing more confusing and dissatisfying than such a piece of pie. It signals one major flaw: an overflow of information, which shrinks each slice to a single line.

Whenever a pie chart is drawn, it should be limited to a small data set. Instead of mixing two data sets, represent one and compare its parts. In fact, a pie chart is a good choice only when there are few categories to handle, and a good way to aid comparison is to order the slices from largest to smallest (see the sketch below).
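
A short matplotlib sketch of the advice so far, using invented market-share figures: few categories, slices ordered largest to smallest, and a restrained, color-blind-friendly palette.

```python
import matplotlib.pyplot as plt

# Hypothetical revenue share by region; the numbers are invented for illustration.
shares = {"EMEA": 41, "Americas": 33, "APAC": 19, "Other": 7}

# Order slices from largest to smallest so neighbouring slices are easy to compare.
ordered = dict(sorted(shares.items(), key=lambda kv: kv[1], reverse=True))

# A small, muted, colour-blind-friendly palette instead of the default rainbow.
palette = ["#0072B2", "#E69F00", "#009E73", "#999999"]

plt.pie(ordered.values(), labels=ordered.keys(), colors=palette,
        autopct="%1.0f%%", startangle=90, counterclock=False)
plt.title("Revenue share by region (illustrative data)")
plt.savefig("pie_example.png", dpi=150)
```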

3.       Information Clutter:

Are you clear about the single, precise point you want to convey? Every day, across the world, countless businesses find their meetings drifting off track, defeating the purpose of the whole formal get-together. Make sure the topics covered in your visualization are completely relevant to the subject. Any unnecessary detail may confuse the audience or, worse, push the entire flow of the meeting in an unwanted direction. Limited information, presented with the right structure, is an extremely efficient way to convey the message.

4.       Inefficient Design:

In Steve Jobs' words, "Design is not just what it looks like and feels like. Design is how it works." A beautiful design that cannot convey the idea clearly and in its entirety is ineffective; business demands effective design, not merely beautiful design. Another key factor in designing is communication skill: a developer who knows what is to be conveyed can prepare a visualization that is effective. Behind unattractive colors there may be substantial information vital to the business, and that is how a design passes the test.

5.       Data Issues:

In the end, everything comes down to the one parameter the business is interested in: data. What if the data presented in the visualization is wrong? Bad data cannot be tolerated. Any visualization must be thoroughly checked for data issues before it is presented. Wrong information is not a visualization problem as such, but incorrect data will drain away all the design effort.


Back Up Process For Virtual Servers

4/3/2014

When server virtualization hit the market, users were immediately enthusiastic about the new technology. It was an instant hit owing to the attractive features it offered. Companies started adopting it, beginning with the bigger enterprises and extending to smaller ones as the processes became more cost-effective and optimized. But while IT administration enjoyed the benefits of virtualization, backing up the immense amount of data emerged as a pain point for administrators. In the first few years of server virtualization, data back-up posed the following issues:

Companies first tried to force-fit their existing back-up processes onto the virtual servers. Typically, an agent was run inside each virtual machine to take the back-up. It did the job, but it crippled the network.

VMware Consolidated Backup (VCB) was a welcome step in this scenario. It was an innovative method for backing data up efficiently, but the overall process added complexity to the system. The time delay, cost and additional implementation steps kept many businesses from adopting this solution.

Many start-up ventures spotted the opportunity and stepped in with applications to back up data from virtualized platforms. These were cost-effective solutions for various forms of server virtualization, and small to mid-sized companies quickly went ahead with them. Larger enterprises, however, stayed away: they already had multiple back-up solutions in place and were reluctant to add yet another to their huge and cumbersome back-up estates.

Where Are We Now?

Today there are some excellent applications, each with its own strengths. Companies need to assess their requirements and look for the solution that best suits their demands. Vendors are generally refining enterprise-specific back-up solutions rather than generalized software for everyone. With the virtual machine now the primary data source, IT units must look at the factors that distinguish these back-up solutions from one another.

Steps To Adopt The Right Back-Up Solution:

1.       List Your Needs
  • Current System Deficiencies: 
Shopping the market for a good back-up solution usually means the current back-up system is not working well. It is important to know exactly how it failed, because that dictates the features to look for in the new system. This is the first and most vital step.

  • Applications And Database Details: 
A catalog of all the applications and databases running on the virtual machines will govern the choice of back-up software. Companies often run specialized databases and enterprise-specific applications that cannot be ignored when data is backed up; securing these in their entirety is a fundamental need.

  • Legacy Applications:
Many enterprises still depend on legacy platforms that cannot be virtualized. This calls for back-up solutions that work well with both physical and virtual servers, and it should be understood before choosing the new back-up software.

  • Periodic Or Continuous:
The back-up solution being deployed can be either continuous or periodic. This again depends on business requirements and on how much of the data flow needs to be retained as back-up.


2.       Assess Functionality:
  • Set Priority:
Normally, back-up solutions from reliable vendors will all perform reasonably well, so the distinction between two comparable products becomes crucial. One may be easier to use, the other faster in operation. It is important to set priorities when looking for the right product.

  • Support For Multiple Hypervisors:
If the business has deployed different hypervisors on different servers, the ideal solution is one that works across platforms; there should be no need for separate back-up systems for VMware and Hyper-V.

  • Consistency During A Crash:
The back-up solution's primary job is to ensure that data remains safe once copied. What happens if there is a sudden crash during the process? Keep in mind that the volumes involved today are very large, which demands a reliable solution. Under no circumstances should data be lost during the back-up (a minimal crash-safe write sketch follows).
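
The 'no data loss on a crash' requirement is commonly met by writing to a temporary file, flushing it to disk and only then renaming it over the destination, so a crash leaves either the old copy or the complete new one, never a half-written file. A minimal, generic Python sketch of that pattern (not any vendor's mechanism; the paths are illustrative):

```python
import os
import tempfile

def crash_safe_copy(src: str, dst: str, chunk: int = 1 << 20) -> None:
    """Copy src to dst so that a crash never leaves a partially written dst."""
    dst_dir = os.path.dirname(os.path.abspath(dst))
    fd, tmp_path = tempfile.mkstemp(dir=dst_dir, prefix=".backup-")
    try:
        with open(src, "rb") as fin, os.fdopen(fd, "wb") as fout:
            while block := fin.read(chunk):
                fout.write(block)
            fout.flush()
            os.fsync(fout.fileno())      # data really on disk before we commit
        os.replace(tmp_path, dst)        # atomic rename: old file or new, never half
    except BaseException:
        os.unlink(tmp_path)              # discard the incomplete temporary copy
        raise

# crash_safe_copy("vm-disk.vmdk", "backups/vm-disk.vmdk")   # illustrative call
```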

In short, an ideal back-up solution for an enterprise should work well not only with the virtual servers but also with any physical servers. It also helps if the existing vendor offers extended technical assistance so the back-up process runs smoothly on the existing environment. When making the checklist before purchase, consider whether the chosen back-up solution:
  • Is compatible with the company's infrastructure
  • Supports multiple hypervisors, if the business demands it
  • Ensures data is retained through a crash

And then go ahead with the best choice.


Service Virtualization: Bridging The Gap Between Development And Operations

4/3/2014

The disconnect between development and operations teams is as old as software itself. Each team speaks its own language and focuses on its own confined area of technology, and the two usually meet only when there is a delivery issue. Developers, who build the applications, stay unaware of real-time production problems; operations, which handles real users and business demands, never intends to learn the 'how stuff works' part. The result is a knowledge gap between the two.

The Intentional Disconnect

It may sound surprising, but much of this disconnect resulted from an attempt to solve a different problem. In the early days of software development, there was a poorly understood communication gap between business and development teams, and experts tried to bridge it by cultivating business understanding in developers. This helped to a great extent: development teams started looking at business requirements from the users' perspective and translating them into technical terms. However, as developers became skilled on the business side, they moved farther from an understanding of operations; servers, networks, load and balancing fell outside their concern.

Yet these ignored factors are exactly the fundamentals on which operations teams build their activities. Incompatibilities between applications running on production servers grew large, and almost every big software enterprise today faces the issue to a degree that ranges from problematic to severe.

An Unbelievable Amount Drained To No Use

When production volumes are huge, companies cannot afford incorrect deliveries and delayed reports caused by application incompatibilities. Across the industry, businesses spend hundreds of billions of dollars annually streamlining application testing, formulating new quality plans and scheduling regular meetings between the development and operations teams. Unfortunately, this does not work very well: since the production system cannot be replicated in the development environment, Quality Assurance has only a limited role in controlling incompatibilities and slow application performance in operations.

Similarly, the regular meetings between the two teams focus more on airing their individual problems than on understanding each other's point of view. There is no practical way for developers to learn the language of server loads in a meeting; the most they have done is perfect the optimization of their own programs. So the problem remains, with a huge amount of money spent to no avail.

DevOps As A Strategy

DevOps, as the term suggests, brings the development and operations teams together. It is a recent strategy that is making its way into many well-known multinational companies in America. The focus is on forming an ad hoc team that works as a bridge between the development and production environments, which requires individuals with a sound understanding of both.

But experts do not rely solely on human understanding. As soon as the concept came into existence, research brought the most recent form of virtualization to bear on the problem through business tools, and the result is service virtualization, which addresses the concern well.

What Is Service Virtualization?

Service virtualization is based on simulating the production environment so that it can be used for testing. Platforms differ to suit various technologies, but the essence remains the same. At a high level, it can be characterized by the following features:

  • Monitor And Capture The Environment Details:
There are many ways to identify how an application behaves in the production environment: reports, logs or continuously monitored server responses. With this information as the base, the environment is simulated for testing (a minimal sketch of such a record-and-replay stub follows this list). This is a more effective way of capturing real-world behavior, where the response details are far more realistic than anything imagined on a development server.

  • Complexity Poses No Issue:
Service virtualization works well even for very complex systems. A production environment too complex to recreate for development can be simulated with the necessary detail and near-real data. This lets developers see their applications running at a much wider scale, and testing in this manner exposes performance issues if there are any. In brief, it gives the development team a more practical view of the problem.

  • Resource Understanding:
Service virtualization uses the same sets of protocols and libraries used in operations. Because the simulation is derived from operations, the details stay up to date, which removes the disconnect over input details between the two environments.
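
A toy illustration of the record-and-replay idea behind the first point: capture a real service's responses once, then serve them back to the application under test. The endpoint, port and file names are invented; commercial service virtualization tools do this with full protocol and behavioural support.

```python
import json
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

RECORDING = "recorded_responses.json"   # assumed capture file

def record(real_base_url: str, paths: list) -> None:
    """Capture responses from the real (production-like) service once."""
    captured = {}
    for path in paths:
        with urllib.request.urlopen(real_base_url + path) as resp:
            captured[path] = resp.read().decode()
    with open(RECORDING, "w") as f:
        json.dump(captured, f)

class ReplayHandler(BaseHTTPRequestHandler):
    """Virtual service: answers from the recording instead of the real backend."""
    responses = {}

    def do_GET(self):
        body = self.responses.get(self.path, '{"error": "not recorded"}')
        self.send_response(200 if self.path in self.responses else 404)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body.encode())

if __name__ == "__main__":
    # One-off capture, e.g. record("https://inventory.example.com", ["/api/items"])
    with open(RECORDING) as f:
        ReplayHandler.responses = json.load(f)
    HTTPServer(("localhost", 8099), ReplayHandler).serve_forever()
```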

Service virtualization is a big leap in tackling a concern that organizations have faced for decades. In future, with more efficient business tools and more optimized ways of deploying the service, almost all companies can be expected to benefit from the approach.


How To Keep Storage Consumption Down For A VDI Environment

3/3/2014

The benefits of virtualization are becoming better known with time. Enterprises that have opted for VDI can rattle off the advantages of a Virtual Desktop Infrastructure over a traditional PC environment in minutes, and storage managers and IT administrators appreciate the simplicity of managing and maintaining the hosting servers. However, as every VDI vendor pushes to support more user connections, storage comes under pressure. The near-term threat concerns two factors of storage management in a VDI environment: cost and performance.

Understand Network Storage Tiering

Storage in a VDI environment works entirely differently from storage in a PC environment. Virtual machines can be created in minutes and the user interface comes up even faster, but certain rules govern the storage behind them. Many businesses today leverage Network Attached Storage (NAS), which lets administrators use network storage while caching the most crucial, urgent data on local devices ahead of time. On NAS devices, write I/O is given higher priority than read I/O, so as the number of users grows beyond a certain limit and shared resources reach roughly 40% utilization, write I/O activities are queued in front. This degrades read I/O performance, and the user experience along with it.

To tackle this, solutions are emerging in the form of caching strategies. Employing a front-end caching solution that serves read I/O from local storage improves performance many times over. Several companies are currently working in this area, with promising signs of successful storage caching for VDI environments.

Benefits Of Network Storage Tiering

Two benefits are immediately apparent. One is a low-cost solution that can absorb write I/O workloads effectively, which dramatically reduces storage cost when data volumes are considered at an organizational scale. Smaller enterprises realize less of this benefit, since their existing storage is usually sufficient for the amount of data handled each day.

The other benefit is more efficient use of inactive data sets. Read I/O data sets that are inactive yet consume a good amount of space can be moved to cache storage to enhance performance. Depending on how critical these data sets are, administrators and storage managers may opt for further use of the cached tier.

Space Saving Technologies In A VDI Environment

Given the wide span of usage behind a VDI deployment, any technique that saves storage space is highly welcome. Currently, two such techniques fit VDI deployments well:

1.       Thin Provisioning:

This technique separates the capacity promised to a client machine from the physical space actually consumed on the host. For example, if a virtual desktop is allocated a 30 GB disk but has only written 15 GB of data, only 15 GB is actually consumed on the host's storage, while the desktop still sees its full 30 GB drive. In a VDI environment this is an efficient way to keep resources within a specified usage limit while exploiting them to the fullest; it reduces waste and lets the client machine enjoy uninterrupted performance with a controlled volume of physical storage.
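
Thin provisioning at the file level can be illustrated with a sparse file on a Unix-like filesystem: the guest is promised the full size, but the host stores only the blocks actually written. A minimal Python sketch with illustrative sizes:

```python
import os

DISK_IMAGE = "thin_disk.img"
LOGICAL_SIZE = 30 * 1024**3            # 30 GB promised to the virtual desktop

# Create a sparse file: seek to the end, write one byte, then write a little data.
with open(DISK_IMAGE, "wb") as f:
    f.seek(LOGICAL_SIZE - 1)
    f.write(b"\0")
    f.seek(0)
    f.write(b"x" * (16 * 1024))        # the guest has actually written only 16 KB

info = os.stat(DISK_IMAGE)
print(f"size the guest sees : {info.st_size / 1024**3:.1f} GB")
print(f"space used on host  : {info.st_blocks * 512 / 1024**2:.2f} MB")  # Unix only
```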

2.       De-duplication:

This technique goes back to the days when physical desktops were being standardized. Desktops in an enterprise were usually created from a master image, and replicating that image for every desktop duplicated resources, which was nothing but needless consumption of space. To deal with this, de-duplication was introduced using linked clones and images, so all desktops shared their common files through the linked clones. In a VDI environment the same technique applies at the storage array, and since most of the content is static operating system data, de-duplication through cloning is an excellent way of saving space (a toy content-hash sketch follows).
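
At its core, de-duplication stores each unique chunk of data once and keeps references for every copy. A toy Python sketch of the idea (real storage arrays de-duplicate fixed or variable blocks inside the array or filesystem, not in application code):

```python
import hashlib

class DedupStore:
    """Toy content-addressed store: identical chunks are kept only once."""
    def __init__(self, chunk_size: int = 4096):
        self.chunk_size = chunk_size
        self.chunks = {}    # sha256 digest -> unique chunk data
        self.images = {}    # image name -> list of chunk digests (the 'linked clone')

    def write_image(self, name: str, data: bytes) -> None:
        refs = []
        for i in range(0, len(data), self.chunk_size):
            chunk = data[i:i + self.chunk_size]
            digest = hashlib.sha256(chunk).hexdigest()
            self.chunks.setdefault(digest, chunk)   # stored once, however many desktops
            refs.append(digest)
        self.images[name] = refs

    def physical_bytes(self) -> int:
        return sum(len(c) for c in self.chunks.values())

# Two desktops cloned from the same master OS image plus a little unique data.
master = b"OS" * 500_000
store = DedupStore()
store.write_image("desktop-1", master + b"user-1 profile")
store.write_image("desktop-2", master + b"user-2 profile")
print("logical bytes written:", 2 * (len(master) + len(b"user-1 profile")))
print("physical bytes stored:", store.physical_bytes())   # close to one master copy
```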

What Is Better: SAN Or NAS?

Let us look at the basic difference. A SAN stores data at the block level; NAS stores data at the file level. Both storage schemes work for VDI, though the implementations differ: in block-based solutions (SAN), the VDI layer takes care of cloning and de-duplication, while in a file-based system (NAS) this is performed by the storage array. With that distinction in mind, the decision comes down to where the user wants the task delegated.


Factors Ensuring True Mobility In VDI

3/3/2014

The BYOD (Bring Your Own Device) policy is gaining ground rapidly. Enterprises have started looking beyond the office premises and encouraging employees to connect from anywhere, and there is no doubt that VDI plays an important role in boosting BYOD. Connecting to servers on the move, with small devices acting as the interface, has changed an entire technological era, and VDI is the platform that has made this advance possible. It saves many businesses a great deal in infrastructure expenditure, and any new start-up now considers VDI deployment a strong candidate for its set-up.

Failure Of ‘One-Size-Fits-All’ Approach:

In all the hustle and bustle of virtualization and BYOD becoming dominant, there have been challenges that VDI experts took very seriously. Earlier, companies involved in virtualization attempted what seemed a sensible approach to device usability, called 'one-size-fits-all': applications were made compatible with users' devices regardless of what those devices were. This simplified the vendor's task, since with the number of connecting devices growing every day, catering to the individual specifications of devices of every size was a nightmare. Enterprises were happy with it until they suddenly realized that one crucial factor was deeply hampered by this approach: performance.

Quite understandably, performance degraded when device specifications were ignored, and content readability suffered greatly. With BYOD being implemented across organizations, this approach had to be abandoned, and VDI experts reconsidered the decision.

Vital Factors To Ensure True Mobility:

When companies plan to enforce a BYOD policy, a few considerations become serious. A full VDI environment is very different from a conventional desktop environment: a user on the move wants to stay independent and in full control of the particular applications related to their work. The following factors must be thought through if true mobility is to be supported:

Application Hosting:
  • App Hosting Through Mobile:
Enabling users to host their own applications independent of the desktop solves the problem of performance degradation. Confined within certain rights, application hosting also provides app isolation, keeping applications secure from external intrusion.

  • Optimal Resource Utilization:
Most application vendors design their applications with Windows in mind, including business applications. Application hosting must therefore be compatible with a Windows server to utilize resources in the best possible manner.

  • Optimization Of Existing Windows Apps:
Many existing Windows applications may already meet employee needs. Understanding which applications map to the requirements is an easy first step towards the right app. Thousands of applications are offered today for educational purposes, and the same approach applied at the enterprise level can get application optimization to its best.


Integrated Networking:
  • Any Location, Anytime:
Employees must be able to connect easily, from anywhere, at any time, and the solution implemented must take care of this.

  • No Configuration:
What everyone wants from connection set-up is zero configuration. Typing a set of specific entries every time a connection is made is extremely frustrating, so devices must offer easy, intelligent access that configures the system automatically irrespective of location.

  • Control And Security:
The level of granularity at which control is offered is for the IT team to decide. While specific administrative rights stay with the device owner to manage their own apps, IT is responsible for monitoring for any threat of a security breach.

  • Multimedia Redirection:
Multimedia and video can create an extremely heavy load on the network. Redirection technology helps by using playback capabilities installed locally on the device.


App Management And Service Delivery:
  • On-Device Support:
The implemented solution must help the device owner manage their own applications and resolve minor issues easily and quickly.

  • Interoperability:
An interoperable solution that can work with any hypervisor, any virtualized desktop or any cloud infrastructure is the foremost demand. There should be no major configuration changes every time an employee changes project within the enterprise.

  • Performance Monitoring:
An efficient solution must track application performance and resource utilization on the device. This helps IT teams during initial implementations to track down the applications that cause problems (a minimal sketch follows this list).
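
A rough idea of what on-device performance monitoring can look like, sketched with the third-party psutil library; the thresholds and polling interval are assumptions, not features of any VDI product.

```python
import time
import psutil   # third-party: pip install psutil

CPU_ALERT, MEM_ALERT = 85.0, 90.0      # assumed alert thresholds (percent)

def snapshot() -> dict:
    """Collect a small set of device-level metrics an IT team might watch."""
    heaviest = sorted(
        ((p.info["name"], p.info["memory_percent"] or 0.0)
         for p in psutil.process_iter(["name", "memory_percent"])),
        key=lambda item: item[1], reverse=True)[:3]
    return {
        "cpu_percent": psutil.cpu_percent(interval=1),
        "mem_percent": psutil.virtual_memory().percent,
        "heaviest_processes": heaviest,
    }

def monitor(poll_seconds: int = 30) -> None:
    while True:
        s = snapshot()
        if s["cpu_percent"] > CPU_ALERT or s["mem_percent"] > MEM_ALERT:
            print("ALERT:", s)          # a real agent would report this to IT
        time.sleep(poll_seconds)

if __name__ == "__main__":
    print(snapshot())
```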

Wrapping up, true mobility can be achieved if enterprises take into consideration not only VDI but also the complementary technologies. Looking at the bigger picture, the initial investment will quickly be offset by the infrastructure savings of the BYOD approach. In the long run, one can envision a world where businesses never worry about capital expenditure on desktops, installation or maintenance costs.


Understanding Load On Wireless Connections

3/3/2014

The internet is everywhere today, and ordinary mobile phones have become obsolete. With the growing use of electronic gadgets and smartphones, and new deployments by enterprises, smart technology has pushed consumers from ordinary broadband usage to a much higher level of connectivity. Beyond data, video and voice transmission place their own demands on existing broadband systems. And throughout all of this, users want to be connected without the burden of a cord hanging from their devices: uninterrupted wireless connectivity, irrespective of which device they use, where they go and what they do with it.

The prime, immediate need facing wireless companies is offering faster internet connections, and this demand never ends. Users of every kind want transmission with no delay, for every kind of data. While text and voice can be handled with ease, video causes considerable delay even on very fast networks. Current development centers on this: internet providers are focusing on video and animation traffic, trying to reduce delay during transmission.

The current generation of the wireless industry has responded brilliantly to the requirement for faster connections. There is a new wireless platform on which the industry is setting its future course, called Long Term Evolution (LTE). Mobile service providers across the world are adopting the technology. It is favorable to existing service providers because it can reuse much of the existing radio infrastructure. With the exception of a few rural areas where the radio technology has yet to evolve, LTE is expected to take over very soon, before the next generation arrives.

The Basis Of Faster Connection

The nearer the receiving antenna is to the user, the faster the connection. For the highest data speeds, a worthwhile strategy being adopted is a shift to a 'mesh' or nodal infrastructure. Implementing this across the network span requires more sites, each servicing multiple antennas, which effectively reduces the overall bulk of the infrastructure. These cell sites are usually low-powered, and companies have estimated considerable savings in power consumption from deploying them.
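
Why proximity matters can be seen from the standard free-space path loss formula, FSPL(dB) = 20*log10(d_km) + 20*log10(f_MHz) + 32.44: each halving of the distance to the antenna recovers roughly 6 dB of signal. A quick Python illustration with an arbitrary LTE-band frequency:

```python
import math

def fspl_db(distance_km: float, freq_mhz: float) -> float:
    """Free-space path loss in dB for a given distance and carrier frequency."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

FREQ_MHZ = 1800.0                    # a common LTE band, chosen for illustration
for d in (2.0, 1.0, 0.5, 0.25):      # distance to the serving antenna, in km
    print(f"{d:5.2f} km -> {fspl_db(d, FREQ_MHZ):6.1f} dB path loss")
# Each halving of distance cuts the loss by about 6 dB, which is why dense,
# low-powered nodal cell sites deliver faster connections.
```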

The low-power factor brings another aspect of data transmission into play. Lower power goes hand in hand with higher-density data transmission, which calls for a remotely located base transceiver station (BTS). Both the reduced power and the remote location of the BTS contribute to transmission density, which is essential for nodal systems to work correctly.

Inter-dependencies Of Systems

One cannot imagine data, voice and video transmission systems existing separately today. They are converging, and so are the various modes of carrying the signals. Wireless, wireline and cable networks are joining hands, aligning with each other to offer uninterrupted, uncompromised and fast transmission. For higher efficiency, more data can be pushed onto the wired networks while part of it is handled wirelessly, keeping the fusion of networks alive. If LTE is gradually adopted across the globe, a completely different topology is expected to emerge.

Broadband Initiative

Currently, the upgrade to LTE is a costly affair for small internet service providers, given the cost of the initial set-up. It is being adopted by a few of the bigger names at present and has already established a good base in many prominent states in the U.S.

However, the federal government, together with many states, is involved in a 'Broadband Initiative', a drive that enables small and mid-sized internet businesses to take loans at very low interest rates, in an attempt to encourage LTE roll-out across the nation. A high-technology country depends on the internet for almost everything. A balance must be struck between initial costs, debt and profitability before companies can take a logical decision. If peer pressure increases, businesses will be forced to implement LTE sooner; without it, there is a possibility that companies will wait for further cost optimization.

Fixed mobile broadband is overtaking wired networks, and LTE is expected to offer a smooth transition in this process. The amount of data transmitted on smartphones long ago surpassed the amount of voice, which is one more reason the move to faster connections cannot be delayed for long.


    Author

    Vaibhav O Yadav - IT professional who has worked with a Fortune 100 company. I like to spread knowledge, and to analyse situations and environments with a long-term vision and solution. Firm believer in sustainable development for the earth. Apart from being geeky, I also like to play a wide variety of video games for long stretches of time, my favorite being economic simulations.
