Thursday, August 27, 2015

Three critical considerations when optimizing infrastructure for application performance

You need a vendor-neutral, unbiased understanding of system-wide performance, and accurate analytics to support and inform immediate action

This vendor-written tech primer has been edited by Network World to eliminate product promotion, but readers should note it will likely favor the submitter’s approach.

Overprovisioning has been the go-to approach for ensuring infrastructure and application performance. But when performance degradations and unplanned outages occur, even the most experienced teams move into “react-and-guess” mode.

Where to start? Every level of the infrastructure stack comes with its own possible issues, and tracking down the culprit takes time. And with IT infrastructures growing at an exponential pace and workloads moving to the cloud, the typical approach of overprovisioning and reacting-and-guessing is no longer a viable option.

There are three steps IT professionals can take to prevent emerging issues from becoming recurring problems that impact performance and productivity:

* Understand the system’s history, in addition to its present. Understanding how an infrastructure arrived at its current state will provide a clearer picture of what has been integrated throughout the system and the purpose of each component. Each part was put in place for a specific reason. Every application, whether on-premise or hosted, comes with its own dependencies. Patching together the history of the IT infrastructure will help you understand exactly what you are dealing with and why.

It will also give you an idea of the problems the system experienced in the past, which will help you detect issues more quickly. Auditing critical IT infrastructure is another process that helps teams benchmark systems and identify areas that may call for upgrades or more efficient processes. Knowing precisely which application workloads an infrastructure is supporting helps you detect wasteful assets and plan for the necessary size and scale of future deployments.

* Focus on the end user, in both the near and long term. Guaranteed availability isn’t just about alleviating IT headaches. Frequent delays are frustrating for users, and in the end, user issues matter far more than internal frustrations. Overprovisioning is no longer tenable given explosive infrastructure growth, and there is a clear mandate to maximize existing assets.

What’s more, while overprovisioning does take into account workload fluctuations to ensure enough capacity to deliver a good end-user experience, it ties up resources that could be used for valuable new applications, products or services. Understanding traffic patterns, in terms of behavior during peak periods and the tasks that need to be completed during those high-demand times, will help you provision appropriately and ensure all critical workloads function properly.

* Use performance monitoring solutions that integrate with disparate environments. Assessing performance requires a solution that analyzes system-wide health, utilization and performance to identify issues that may increase latency. There are a number of technologies available that attempt to solve this puzzle, such as enterprise systems management (ESM) and network performance management (NPM) tools. However, these monitoring platforms were developed before data centers became as virtualized and heterogeneous as they are today.

With the disparate systems working together in enterprise environments, an understanding of the way these solutions and systems collaborate is critical. Vendor-neutral IT monitoring and management technologies enable workers to measure the outputs and activities in cloud, virtual and on-premise applications from different vendors.

This integration of performance standards should also be reflected in a company’s service-level agreements (SLAs); as each component in an IT infrastructure has come to overlap so heavily with the rest, isolating each element in siloed SLAs no longer makes sense. Rely on SLAs that look at your infrastructure as the holistic entity it is, and focus on performance, not just availability.
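Provisioning for observed peaks rather than applying a blanket overprovisioning multiplier, as described above, can be sketched in a few lines. The workload numbers, percentile choice and headroom factor below are invented for illustration, not drawn from any real system:

```python
# Sketch: size capacity from an observed demand percentile plus headroom,
# instead of a flat "double the peak" overprovisioning rule.
# All numbers are illustrative.

def percentile(samples, pct):
    """Nearest-rank percentile of a list of samples."""
    ordered = sorted(samples)
    rank = max(0, int(round(pct / 100.0 * len(ordered))) - 1)
    return ordered[rank]

def size_capacity(samples, pct=95, headroom=1.25):
    """Capacity = chosen demand percentile plus a safety margin."""
    return percentile(samples, pct) * headroom

# Hourly CPU demand (arbitrary units) over a day, with an evening peak.
demand = [20, 18, 15, 14, 16, 22, 35, 50, 55, 52, 48, 45,
          44, 46, 50, 58, 70, 85, 90, 80, 60, 40, 30, 25]

peak_sized = size_capacity(demand)     # p95 of demand, plus 25% headroom
overprovisioned = max(demand) * 2      # naive "double the peak" sizing
```

The difference between the two numbers is the capacity that blanket overprovisioning ties up, which the article argues could go to new applications or services instead.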

The pace of IT demands agility, accuracy and answers that drive optimal performance at all times, but guaranteeing performance isn’t easy. It requires a vendor-neutral, unbiased understanding of system-wide performance, and accurate analytics to support and inform immediate action. All of this starts with a better comprehension of IT infrastructure assets, which only becomes more crucial as additional investment of resources, both financial and human, becomes necessary.

Answers are the silver bullet in the modern IT landscape, and they’re not only about the data stored in an application infrastructure, but how that data is correlated and analyzed to deliver value. All of the knowledge gained from an infrastructure’s operation is significant to the ultimate success of the business, and the companies that take proactive steps toward gaining those insights will be the ones that find themselves ahead of the curve.

Best Microsoft MCTS Certification, Microsoft MCITP Training at certkingdom.com

Friday, August 21, 2015

Microsoft fires back at Google with Bing contextual search on Android

"Snapshots on Tap" echoes a feature coming with the next version of Android

Microsoft has pre-empted a feature Google plans to include in the next version of Android: an update to the Bing Search app, released Thursday, lets users get information about what they're looking at by pressing and holding their device's home button.

Called Bing Snapshots, the feature is incredibly similar to the Now on Tap functionality Google announced for Android Marshmallow at its I/O developer conference earlier this year. Bing will look over a user's screen when they call up a Snapshot and then provide them with relevant information along with links they can use to take action like finding hotels at a travel destination.

For example, someone watching a movie trailer can press and hold on their device's home button and pull up a Bing Snapshot that will give them easy access to reviews of the film in question, along with a link that lets them buy tickets through Fandango.

Google Now On Tap, which is slated for release with Android Marshmallow later this year, will offer similar features with a user interface that would appear to take up less screen real estate right off the bat, at least in the early incarnations Google showed off at I/O.

The new functionality highlights one of the major differences between Android and iOS: Microsoft can replace system functionality originally controlled by Google Now and use that to push its own search engine and virtual assistant. Microsoft is currently beta testing a version of its virtual assistant Cortana on Android for release later this year as well.

A Cortana app is also in the cards for iOS, but Apple almost certainly won't allow a third-party virtual assistant to take over capabilities from Siri, especially since Google Now remains quarantined inside the Google app on that mobile platform.

All of this comes as those three companies remain locked in a tight battle to out-innovate one another in the virtual assistant market as a means of controlling how users pull up information across their computers and mobile devices. For Microsoft and Google, there's an additional incentive behind the improvements: driving users to their respective assistants has the potential to boost use of the connected search engines.


Thursday, August 13, 2015

Dropbox security chief defends security and privacy in the cloud

Patrick Heim is the (relatively) new head of Trust & Security at Dropbox. Formerly Chief Trust Officer at Salesforce, he has also served as CISO at Kaiser Permanente and McKesson Corporation, and has worked in information security for more than 20 years. Here he discusses security and privacy in the arena of consumerized cloud-based tools like those employees select for business use.

What security and privacy concerns do you still hear from those doing due diligence prior to placing their trust in the cloud?
A lot of them are just trying to figure out what to do with the cloud in general. Companies right now have really three choices, especially with respect to the consumer cloud (i.e., cloud tools like Dropbox). One of them is to kind of ignore it, which is always a horrible strategy because when they look at it, they see that their users are adopting it en masse. Strategy two is to build IT walls up higher and pretend it’s not happening. Strategy three is adoption, which is to identify what people like to use and convert it from the uncontrolled mass of consumerized applications into something security feels comfortable with, something that is compliant with the company’s rules with a degree of manageability and cost control.

Are there one or two security concerns you can name? Because if the cloud was always entirely safe in and of itself, the enterprise wouldn’t have these concerns.

If you look at the track record of cloud computing, it’s significantly better from a security perspective than the track record of keeping stuff on premise. The big challenge organizations have, when you look at some of these breaches, is they’re not able to scale up to secure the really complicated in-house infrastructures they have.

We’re [as a cloud company] able to attract some of the best and brightest talent in the world around security because we’re able to get folks that quite frankly want to solve really big problems on a massive scale. Some of these opportunities aren’t available if they’re not in a cloud company.

How do you suggest that enterprises take that third approach, which is to adopt consumerized cloud applications?
The first step is through discovery. Understand how employees use cloud computing. There are a number of tools and vendors that help with that process. With that, IT has to be willing to rethink their role. Employees should really be the scouts for innovation. They’re at the forefront of adopting new apps and cloud technology. The role of IT will shift to custodian or curator of those technologies. IT will provide integration services to make sure that there is a reasonable architecture for piecing these technologies together to add value and to provide security and governance to make sure those kinds of cloud services align with the overall risk objectives of the organization.
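The discovery step Heim describes can start as simply as tallying which cloud services show up in outbound proxy logs. A minimal sketch, in which the log format, domains and app mapping are all invented for illustration:

```python
# Sketch: discover which cloud apps employees actually use by tallying
# destination domains seen in outbound proxy logs. Log format is invented.
from collections import Counter

def discover_cloud_apps(log_lines, known_apps):
    """Count hits per known cloud app, keyed by destination domain."""
    hits = Counter()
    for line in log_lines:
        domain = line.split()[-1]      # assume domain is the last field
        app = known_apps.get(domain)
        if app:
            hits[app] += 1
    return hits

known_apps = {"dropbox.com": "Dropbox", "drive.google.com": "Google Drive"}
logs = [
    "2015-08-13 user1 GET dropbox.com",
    "2015-08-13 user2 GET dropbox.com",
    "2015-08-13 user3 GET drive.google.com",
]
usage = discover_cloud_apps(logs, known_apps)
```

In practice the commercial discovery tools Heim mentions work from much richer telemetry, but the output is the same kind of ranked inventory of what employees have already adopted.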

"If you look at the track record of cloud computing, it’s significantly better from a security perspective than the track record of keeping stuff on premise."

Patrick Heim, Head of Trust & Security, Dropbox

How can the enterprise use the cloud to boost security and minimize company overhead?
If you think about boosting security, there is this competition for talent and the lack of resources for the enterprise to do it in-house. If you look at the net risk concept, where you evaluate your security and risk posture prior to and after you invest in the cloud, and you understand what changes, one of those changes is: what do I not have to manage anymore? If you look at the complexity of the tech stack, there are security accountabilities, and the enterprise shifts the vast majority of security accountabilities on the infrastructure side to the cloud computing provider; that leaves your existing resources free to perform more value-added functions.

What are the security concerns in cloud collaboration scenarios?
When I think about collaboration especially outside of the boundaries of an individual organization, there is always the question of how do you maintain reasonable control over that information once it’s in the hands of somebody else? There is that underlying tension that the recipient of that shared information may not continue to protect it.

In response to that, there is ERM (enterprise rights management), which provides document-level control that’s cryptographically enforced. We’re looking at ways of minimizing the usability tradeoff that can come with adding these kinds of security advancements. We’re working with vendors in this space to identify what we have to do from an interface and API perspective to integrate this, so that the impact on the end user of adopting these advanced encryption capabilities is absolutely minimized, meaning that when you encrypt a document using these technologies you can still, for example, preview it and search for it.

How do enterprises need to power their security solutions in the current IT landscape?
When they look at security solutions, I think more and more they have to think beyond the old model of the network perimeter. When they send data to the cloud, they have to adopt a security strategy that also involves cloud security, where the cloud itself provides security as one of its functions.

There are a number of cloud-access security brokers, and the smart ones aren’t necessarily sitting on the network and monitoring; instead they interact through access controls and APIs, looking at the data people place into cloud environments, analyzing it for policy violations, and providing archiving, backup and similar capabilities.

When evaluating security tools, companies should focus on how these capabilities are going to scale across multiple cloud vendors, and on getting away from inserting tools directly into the network in favor of API integration with those vendors.
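The API-driven broker model Heim contrasts with inline network monitoring can be sketched roughly as a periodic policy scan over file metadata pulled from each provider's API. The records, field names and policy rules below are entirely hypothetical, standing in for whatever a real broker would fetch:

```python
# Sketch: a CASB-style policy scan over file metadata that an API-based
# broker might pull from a cloud provider. Records and rules are invented.

def scan_for_violations(files, policy):
    """Flag files whose external sharing conflicts with their classification."""
    violations = []
    for f in files:
        if f["shared_externally"] and f["classification"] in policy["no_external"]:
            violations.append((f["name"], "external share of restricted data"))
    return violations

policy = {"no_external": {"confidential", "regulated"}}
files = [
    {"name": "roadmap.docx", "classification": "confidential",
     "shared_externally": True},
    {"name": "lunch-menu.pdf", "classification": "public",
     "shared_externally": True},
]
flagged = scan_for_violations(files, policy)
```

Because the scan works on metadata fetched over APIs rather than on packets in flight, the same loop scales across multiple cloud vendors without touching the network path.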


Wednesday, August 5, 2015

So long (Vista), it's been good to know yah

Windows 8's predecessor in Microsoft's every-other-OS-flops series now has a user share of just 2%

Windows Vista, the perception-plagued operating system Microsoft debuted to the general public in early 2007, has sunk to near insignificance, powering just two out of every 100 Windows personal computers, new data shows.

According to analytics provider Net Applications, Windows Vista's user share, an estimate based on counting unique visitors to tens of thousands of websites, stood at 2% at the end of July.

Vista has been in decline since October 2009, when it peaked at 20% of all in-use Windows editions. Not coincidentally, that month also saw the launch of Vista's replacement -- and Microsoft's savior -- Windows 7. Within a year, Vista's user share had slumped to less than 15%, and in less than two years it fell below 10%.

Since then, however, Vista users have dragged their feet: The OS took another four years to shed another eight percentage points of user share. Projections based on the average monthly decline over the past year signal that Vista won't drop under the 1% mark until April 2016.

Vista's problems have been well chronicled. It was two-and-a-half years late, for one. Then there were the device driver issues and ballyhoo over User Account Control (UAC). It was even the focus of an unsuccessful class-action lawsuit that alleged Microsoft duped consumers into buying "Vista Capable"-labeled PCs, a case that revealed embarrassing admissions by senior executives who themselves struggled with the OS.

Even former CEO Steve Ballmer admitted it was a blunder. In a pseudo-exit interview in 2013 with long-time Microsoft watcher Mary Jo Foley of ZDNet, Ballmer cited Vista as "the thing I regret most," tacitly laying most of Microsoft's then-problems at the OS's doorstep, from its failure in mobile to the slump in PC shipments.

Those still running Vista -- using Microsoft's claim that 1.5 billion devices run Windows, Vista's share comes to around 30 million -- have been left out in the cold by Microsoft and its Windows 10 upgrade: Vista PCs are not eligible for the free deal.
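The figures above are easy to sanity-check. The monthly decline rate below is inferred from the article's own projection (2% at the end of July 2015, under 1% by April 2016, roughly nine months later); it is not an official number:

```python
# Sanity-check the article's Vista figures. The decline rate is inferred
# from "2% in July 2015, under 1% by April 2016" (about nine months).
windows_devices = 1.5e9   # Microsoft's claimed Windows install base
vista_share = 0.02        # Net Applications user share, end of July 2015

vista_devices = windows_devices * vista_share   # about 30 million PCs

monthly_decline = 1.0 / 9                       # ~0.11 points per month
months_to_one_percent = (2.0 - 1.0) / monthly_decline   # ~9 months
```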

It's actually good, at least for Microsoft, that Vista is on so few systems. The company will ship the last security updates for the aged OS on April 11, 2017, 20 months from now.

And there is a silver lining for Vista owners: At least their OS is more popular than Linux.
