Author: Jeff Choinski, Consultant
Symantec held their annual World Sales & Marketing Conference July 10-15, 2011, in Las Vegas. This year, they combined the partner training program with their annual system engineer training, which created an opportunity for partners to meet, trade stories, and hear first-hand experiences from Symantec’s SEs.
This year’s main themes focused on the cloud, protecting virtual environments, and how Symantec’s products fit into this ever-evolving IT landscape. NetBackup for VMware helps protect a company’s virtual environment with features such as Auto Image Replication (AIR), automatic detection of newly created guests, granular recovery, and deduplication through V-Ray integration. The “.cloud” product set now enables companies to offer backup and archive as a service, further maturing their “IT as a service” models while improving availability and reducing costs. The appliance offerings in both the NetBackup and PureDisk space give organizations a scalable, all-in-one solution to protect their data, and Symantec has expanded the hardware configurations of the NetBackup appliance to accommodate network environments from 1G to 10G or to support Fibre Channel infrastructure. ApplicationHA for VMware adds another level of protection for Windows and Linux VMs by providing a product that is not only guest aware, but application aware. It can stop and restart applications when failures occur, rather than restarting the entire VM, and working with VMware HA it can restart and recover the virtual machines themselves if necessary. The net result is that you can run more business-critical applications in a virtual environment without having to worry about outages and downtime. Keep an eye out for more releases from Symantec this year that enable businesses to protect their data, move to the cloud, and reduce downtime.
Author: Jake Roczniak, Consultant
Last month EMC held its annual user conference, EMC World 2011, in Las Vegas. Each year the company chooses a theme that broadly defines its core focus for the event; this year’s theme was “Cloud Meets Big Data”. In his keynote, EMC CEO Joe Tucci declared that EMC’s role was “… to lead customers on their journey to cloud computing and transforming IT.” The “Big Data” aspect EMC is referring to is the projection that the so-called digital universe will contain 35 zettabytes of information within the next decade, and IDC also expects server images to grow by 10x over the same period. So not only will servers continue to get more powerful, they will also multiply wildly. EMC introduced what it is calling “The EMC Big Data Stack,” defining its view of how to store, manage, and act on the big data coming downstream, and it is aligning much of its product set around its vision of a hybrid-cloud model. EMC made many announcements; the ones I think will be the most interesting to keep an eye on include:
Greenplum & Hadoop - a “big data” analytics hardware platform
Project Lightning – a flash-based PCIe server-side device for moving workloads to and from the storage array and the physical server itself, utilizing FAST
All Flash versions of the VNX and VMAX
Isilon 108NL – New hardware that can reach a 15 petabyte file system in a single volume
VPLEX Geo – Create a federated storage pool at asynchronous distances
Atmos 2.0 – The second generation of EMC’s globally scalable storage system
Were you at EMC World this year? If so, what did you think?
Author: Kushal Patel, Senior Consultant
Really? Is everyone that surprised that a cloud provider had an outage? An Amazon EC2 service disruption is never timely, but anyone with a well-planned DR strategy should not have been affected. If you want to know what happened, you can read the Amazon post mortem here.
This raises the question: “Are users of cloud service providers neglecting to consider Disaster Recovery as part of their new cloud-based architecture?”
Simple answer: “If they are, they shouldn’t…”
The main message here is: read the cloud providers’ SLAs, compare them to your Recovery Time and Recovery Point Objectives (RTO/RPO), and plan accordingly. The location of application, compute, network, and storage resources, whether in the cloud or on-premises, does not preclude an organization from planning for DR. This includes Infrastructure, Platform, AND Software as a Service.
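To make that comparison concrete, here is a minimal sketch in Python; the application, SLA figures, and objectives below are entirely hypothetical, and the point is simply to line up a provider’s commitments against your own RTO/RPO and flag the gaps your DR plan has to cover.

```python
# Hypothetical illustration: compare a cloud provider's SLA commitments
# against your own Recovery Time / Recovery Point Objectives.
# All figures below are made up for the example.
from dataclasses import dataclass

@dataclass
class Objectives:
    rto_minutes: float   # maximum tolerable downtime
    rpo_minutes: float   # maximum tolerable data-loss window

@dataclass
class ProviderSLA:
    committed_recovery_minutes: float   # provider's stated recovery time
    committed_data_loss_minutes: float  # provider's stated replication lag / data loss

def gaps(app: str, objectives: Objectives, sla: ProviderSLA) -> list[str]:
    """Return the DR gaps that must be covered by your own plan."""
    issues = []
    if sla.committed_recovery_minutes > objectives.rto_minutes:
        issues.append(f"{app}: provider recovery exceeds RTO "
                      f"({sla.committed_recovery_minutes} > {objectives.rto_minutes} min)")
    if sla.committed_data_loss_minutes > objectives.rpo_minutes:
        issues.append(f"{app}: provider data-loss window exceeds RPO "
                      f"({sla.committed_data_loss_minutes} > {objectives.rpo_minutes} min)")
    return issues

# Example: an order-entry app with a 60-minute RTO and a 15-minute RPO
print(gaps("order-entry",
           Objectives(rto_minutes=60, rpo_minutes=15),
           ProviderSLA(committed_recovery_minutes=240, committed_data_loss_minutes=60)))
```

Any non-empty result means the provider’s SLA alone won’t meet your objective and you need your own replication, backup, or failover strategy on top of it.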
Consult with a DR specialist to create a design that encompasses all of your critical resources and adheres to your business’s availability needs. Like I said, “You get what you plan for…”
For those of you who were affected by the outage, I truly am sorry for your inconvenience, but I thank you for the lesson.
Guest Blogger: Jason Diesel, Director, Systems Engineering, Varonis
Virtual servers and virtualized storage systems contain real data. This data needs to be managed and protected, just like the data sitting on physical servers—it needs to be accessible by the right people, its usage needs to be monitored, and the right people need to be involved to decide who gets access to it and what acceptable use is.
Organizations no longer have to manually manage permissions to ensure that only the correct users have access to the right data, and that permissions are revoked when they are no longer needed. The previously impossible is now possible by leveraging metadata, which makes protecting the data on your virtualized storage as easy as VMware makes it to spin up a virtual machine.
When it comes to identifying sensitive data and protecting access to it, a number of types of metadata are relevant: user and group information, permissions information, access activity, and sensitive content indicators. A key benefit to leveraging metadata for preventing data loss is that it can be used to focus and accelerate the data classification process. In many instances the ability to leverage metadata can speed up the process by up to 90%, providing a short list of where an organization's most sensitive data is, where it is most at risk, who has access to it and who shouldn't.
Key questions that can be answered with the intelligent use of metadata include: Who owns this data? Who has access to it? Who should have access to it? Who is using it? What data is no longer being used? Where is sensitive data overexposed, and how do I fix it? Software automation that uses this metadata can supply the answers, route them, and make them available to the newly identified data owners and IT, so that the right people in the organization can make informed data governance decisions.
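As a rough illustration of the raw material involved (this is a sketch of basic filesystem metadata collection, not Varonis’ implementation), the snippet below walks a hypothetical share and records owner, permission, and last-access information that governance tooling could correlate with directory-service and access-activity data:

```python
# Rough illustration only: gather basic filesystem metadata (owner,
# permission bits, last access time) that data-governance tooling can
# correlate with directory and access-activity information.
# The path "/shared/finance" is a hypothetical example.
import stat
import time
from pathlib import Path

def collect_metadata(root: str):
    records = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        info = path.stat()
        records.append({
            "path": str(path),
            "owner_uid": info.st_uid,                            # map to a user via your directory
            "mode": stat.filemode(info.st_mode),                 # e.g. "-rw-r--r--"
            "world_readable": bool(info.st_mode & stat.S_IROTH), # a crude "over-exposed" flag
            "last_access": time.ctime(info.st_atime),
        })
    return records

if __name__ == "__main__":
    for rec in collect_metadata("/shared/finance"):
        if rec["world_readable"]:
            print("Potentially over-exposed:", rec["path"], rec["mode"])
```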
This post just scratches the surface of this important issue, but you can learn more about how to leverage metadata technology at the January 20 VMUG Winter Warmer Event at Gillette Stadium in Foxborough. Daymark and Varonis will be discussing this topic at 2:00pm, Red Level, Room 20. Hope to see you there!
Author: Brenden Doyle, Senior Consultant
There are a few different ways to encrypt backup tapes on the market today, using either software or hardware solutions. One thing they all have in common is that they need a key management solution to manage the encryption keys.
Some key management solutions are considered “in band” solutions, such as the KMS feature of NetBackup, where the Master server can manage the keys for encryption-capable tape drives. Other key management solutions are considered “out of band,” such as Q-EKM and SKM from Quantum. Both of these out-of-band solutions use a dedicated key management appliance to supply encryption keys directly to the tape drives themselves. Each of these solutions is also proprietary to the drive type it supports -- Q-EKM is used for IBM drives and SKM is used for HP drives. This can be a bit confusing and needs to be considered when adding additional sites to an existing backup configuration. For instance, if you are set up with IBM drives using Q-EKM for key management, you are tied to IBM drive technology if you want to swap tapes between the sites.
Another issue to consider is NDMP backups, as direct NDMP configurations pose a problem when using “in band” key management utilities. (Note: by “direct NDMP backups” I mean configurations where a tape drive is directly connected to a filer.) This poses an issue for the NetBackup Media Server Encryption Option: since it uses a tape driver on the media server to do the encryption, there is no way for it to encrypt a backup being written by the NDMP appliance. It also poses an issue for the KMS “in band” key management feature, which has no way to request a key from the Master server when the drive is directly attached to the filer. For an environment with many large filers, “out of band” key management utilities allow you to keep the direct NDMP backup architecture in place, with its high-performance tape writes. An “in band” key management utility might require a switch to a remote NDMP architecture, where the data first travels over the network to a backup server before it is written to tape. That is a significant degradation in performance, and one that may not be acceptable to the end user.
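The compatibility rules above lend themselves to a quick sanity check before you add drives or sites. The sketch below is illustrative only; the drive-to-key-manager mappings simply encode the examples from this post, not an authoritative support matrix.

```python
# Illustrative sketch of the key-management compatibility rules discussed
# in this post. The mappings reflect the article's examples, not an
# exhaustive or authoritative vendor support matrix.

SUPPORTED_DRIVES = {
    "Q-EKM": {"IBM"},                # Quantum Q-EKM manages keys for IBM drives
    "SKM": {"HP"},                   # Quantum SKM manages keys for HP drives
    "NetBackup KMS": {"IBM", "HP"},  # in-band: keys served via the Master server (assumed here)
}

IN_BAND = {"NetBackup KMS"}

def check_config(key_manager: str, drive_vendor: str, direct_ndmp: bool) -> list[str]:
    """Return a list of problems with the proposed drive/key-manager combination."""
    problems = []
    if drive_vendor not in SUPPORTED_DRIVES.get(key_manager, set()):
        problems.append(f"{key_manager} does not manage keys for {drive_vendor} drives")
    if direct_ndmp and key_manager in IN_BAND:
        problems.append("direct NDMP (drive attached to the filer) cannot get keys "
                        "from an in-band key manager; use an out-of-band appliance "
                        "or switch to remote NDMP")
    return problems

# Example: adding HP drives on a filer-attached (direct NDMP) path
print(check_config("NetBackup KMS", "HP", direct_ndmp=True))
```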
To summarize, know which key management utility is in use and match it when adding new tape drives or libraries to an existing configuration. Keep in mind that direct-attached NDMP backups might need a different key management utility, and that the best way to preserve the direct-attached architecture is to use an “out of band” key management appliance.
Author: Kushal Patel, Senior Consultant
Cloud computing will have a significant impact on IT functions within the industry over the next several years. The challenge is having a strategy to steer clear of the pitfalls and leverage the opportunities that make sense for your business.
How do you, as the IT manager, sort through all the “cloud computing” clamor?
While cloud computing is still in its early stages, I have sifted through a lot of superfluous information that’s out there and can provide some basic, yet solid, advice.
Let’s start with understanding what cloud computing really is, and how you can begin building a framework for a cloud strategy.
There’s still a lot of confusion as to what cloud computing is – but maybe there shouldn’t be. For a midmarket company, cloud computing can simply be defined as a way to outsource some IT “headaches” to a third party (or even business unit) on a pay-as-you-use service model so you can focus on improving your core competencies. You can compare this to the power industry; before the power grid existed everyone generated and delivered their own power. Today we don’t need an investment in power-generating equipment – we simply pay for what we use and let the power companies deal with the “headache” of power generation.
How to begin establishing a strategy:
The basic questions to ask when setting a cloud strategy center around which core strengths you want to focus on, which capabilities you are lacking, and which headaches you want to offload. Don’t get too wrapped up in the specifics of Public Clouds, Private Clouds, IaaS, PaaS, SaaS, elastic computing, chargeback, and so on. Strategy is a projection into the future, so think about what technologies you want your organization to have a strong competence in, and what technologies are better suited for somebody else to deal with. Those functions that aren't core to your operation are good candidates for the cloud. For instance, data warehousing may be best kept internal, whereas collaboration tools may be a good candidate for the cloud.
Security, security, security:
Everybody's paranoid about cloud security, and for good reason. Anytime you trust a third party, you need to consider the risks. Compliance concerns will always be tricky, and you should always check with your internal compliance watchdogs before deciding to leverage the cloud. But don't arbitrarily assume that your capacity for compliance is better than that of a third party. Just because you feel safer driving your own car doesn’t change the fact that you are safer when a professional is in control (pilot, taxi, captain, etc.).
So in summary, don’t let the term “cloud computing” cause discomfort. Begin with the basic idea of removing your headaches and focusing on critical applications. Be cautious about, but open to, third-party security controls, and embrace the inevitable journey to the cloud.
Author: Kushal Patel, Senior Consultant
For the last 15 years, port-blocking (stateful inspection) firewalls have been the cornerstone of network security. It’s no secret, however, that modern applications and threats easily circumvent the traditional network firewall. Attempts by security teams to bolt application awareness and control onto existing firewall products, or to consolidate “firewall helpers” into a Unified Threat Management (UTM) device, have fallen short of the mark or failed altogether. Applications and threats are still making their way around these fragmented solutions, frustrating IT groups that have only managed to incur additional cost and complexity without fixing the problem.
The old model for network security was simple because everything was black and white. Business applications constituted good, low-risk traffic that should be allowed, while threats – and pretty much everything else – constituted bad traffic that should be stopped. The problems with this approach today are basically threefold:
- Applications have become increasingly gray – classifying applications as good or bad is no longer a straightforward exercise (e.g., Facebook, Gmail, Skype).
- Applications have become increasingly evasive (e.g., instant messengers, proxy avoidance tools).
- Applications have become the predominant target of today’s threat developers (e.g., SQL injection, cross-site scripting).
To help mitigate these evolving risks, enterprises and vendors have tried to compensate for their firewall’s deficiencies by implementing a range of supplementary security solutions, often in the form of standalone appliances. A few common examples are intrusion prevention systems, antivirus gateways, web filtering products, and application-specific solutions – such as a dedicated platform for instant messaging security. The bottom line is that network security in most enterprises is fragmented and broken, exposing them to unwanted business risks and ever-rising costs. Traditional network security solutions have simply failed to keep pace with changes to applications, threats, users, and the network security landscape in general.
Enter Palo Alto Networks and Next Generation Firewalls
Next-generation firewalls are reinventing network security by focusing on applications (App-ID®), Active Directory users (User-ID®), and content (Content-ID®) – not just ports and protocols – as the key elements for delivering visibility and control. They allow enterprises to safely enable modern applications without taking on the unnecessary risks that accompany them, while delivering a substantial reduction in cost and complexity by eliminating the need to deploy a wide variety of additional network security products.
Palo Alto Networks set out to restore the firewall as the cornerstone of enterprise network security infrastructure by “fixing the problem at its core.” Starting with a blank slate, its world-class engineering team took an application-centric approach to traffic classification in order to enable full visibility and control of all types of applications running on enterprise networks – new-age and legacy ones alike. The result of this effort is the Palo Alto Networks family of next-generation firewalls – the only solution that fully delivers on the essential functional requirements for a truly effective, modern firewall:
- The ability to identify applications regardless of port, protocol, evasive tactics or SSL encryption.
- The ability to provide extensive visibility of and granular, policy-based control over applications, including individual functions.
- The ability to accurately identify users and subsequently use identity information as an attribute for policy control.
- The ability to provide real-time protection against a wide array of threats, including those operating at the application layer.
- The ability to support multi-gigabit, in-line deployments with negligible performance degradation.
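To illustrate the shift in policy model (a conceptual sketch only, not Palo Alto Networks’ actual configuration syntax), compare a port-based rule with one that matches on the identified application and user group:

```python
# Conceptual sketch of application/user-based policy versus port-based
# policy. Purely illustrative; not vendor configuration syntax.
from dataclasses import dataclass

@dataclass
class Session:
    port: int
    application: str   # traffic classified by what it actually is
    user_group: str    # identity resolved from the directory (e.g. an AD group)

def legacy_port_rule(s: Session) -> bool:
    # "Allow web" lets anything riding on 80/443 through: browsing,
    # webmail, file sharing, evasive apps tunneled over HTTPS, etc.
    return s.port in (80, 443)

# Application-aware policy: allow specific apps for specific groups,
# deny everything else by default.
POLICY = [
    ("salesforce",   "sales", "allow"),
    ("web-browsing", "any",   "allow"),
    ("bittorrent",   "any",   "deny"),
]

def next_gen_rule(s: Session) -> str:
    for app, group, action in POLICY:
        if app == s.application and group in ("any", s.user_group):
            return action
    return "deny"   # default-deny for unidentified or unsanctioned apps

print(legacy_port_rule(Session(443, "bittorrent", "engineering")))  # True: slips through
print(next_gen_rule(Session(443, "bittorrent", "engineering")))     # "deny"
```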
With the introduction of its family of next-generation firewalls, Palo Alto Networks began the process of re-inventing network security, of restoring effectiveness and simplifying security infrastructure. The result is a market-leading solution that allows CIOs to tackle a broad range of increasingly substantial challenges by:
- Enabling user-based visibility and control for all applications across all ports.
- Stopping malware and application vulnerability exploits in real time.
- Reducing the complexity of security infrastructure and its administration.
- Providing a high-speed solution capable of protecting modern applications without impacting their performance.
- Helping to prevent data leaks.
Considering matters from a business perspective, the Palo Alto Networks next-generation firewall also helps organizations:
- Better and more thoroughly manage risks and achieve compliance – by providing unmatched awareness and control over network traffic.
- Enable growth – by providing a means to securely take advantage of the latest generation of applications and new-age technologies.
- Reduce costs – by facilitating device consolidation, infrastructure simplification, and greater operational efficiency.
The net result is that Palo Alto Networks is providing today’s enterprises with precisely what they need to take back control of their networks, to stop making compromises when it comes to information security, to put an end to costly appliance sprawl, and to get back to the business of making money. By delivering unmatched visibility and control over applications and the threats that seek to exploit them, network security solutions from Palo Alto Networks are substantially raising the bar for effectiveness and efficiency while establishing a new foundation for enterprise security.
Author: Matt Trottier, Principal Consultant
EMC World 2010 stormed through Boston this year, where EMC made its case on why it should be the storage architecture driving your private cloud initiative. EMC released many new products and new feature sets to their existing product lines to further prove its case. During my time at EMC World, I had the chance to attend many of the technical breakout sessions. One I found particularly interesting was on the enhancements coming to the CX4 line in the upcoming months.
Thanks to the new 64-bit architecture of the CX4 line, EMC is able to bring some much-needed feature updates to the CLARiiON via the upcoming release of FLARE 30. FLARE 30 will expand on storage pool technology and virtual provisioning, reintroduce EMC's FAST technology, deliver the first common interface for CX, NAS, and RecoverPoint/SE, and find more useful ways to improve performance with Solid State Drives (SSDs).
Virtual Provisioning came out in FLARE 28 as EMC's initial attempt to bring thin provisioning to the CX4 product line. In a nutshell, it allowed you to build storage pools from either FC or SATA disks to present thin LUNs and overprovision storage. Under the covers, it created a set of meta-LUNs spread across hidden RAID groups that made up the storage pool. What it did well was break the "brick and mortar" approach that had dominated EMC's thinking about allocating and provisioning storage for years, and bring some sense of virtualization to the CLARiiON line. But it was severely limited, since it supported only RAID 5 or RAID 6 and EMC did not recommend it for mixing heavy workloads across different host machines.
In FLARE 30, EMC has revamped the virtual provisioning feature to make it the standard way to provision storage on the CLARiiON going forward. Here are some of the highlights (a short sketch contrasting thin and thick allocation follows the list):
- Pool size and drive count restrictions have been updated to support all drives in the CX4 minus the 5 FLARE drives. Essentially, this means you can build a 955 drive storage pool on a CX4-960 if you want.
- All drive types are now supported in the same storage pool including solid state disks (needed for FAST as explained below).
- RAID 10 can now be used for storage pools to allow for pools to accommodate higher write workloads.
- Thin provisioned LUNs can now be expanded or shrunk in a single step, without having to build a meta-LUN.
- When provisioning LUNs from a storage pool, LUNs can be created as "thick," meaning all space is reserved for the size of the LUN in a contiguous address space.
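As a rough mental model of the thin-versus-thick distinction (not how FLARE actually lays out slices under the covers), a thick LUN reserves its full size from the pool at creation time, while a thin LUN consumes pool capacity only as data is written:

```python
# Rough mental model of thin vs. thick LUN allocation from a storage pool.
# Sizes and behavior are simplified; this is not how FLARE allocates slices.

class StoragePool:
    def __init__(self, capacity_gb: int):
        self.capacity_gb = capacity_gb
        self.consumed_gb = 0

    def reserve(self, gb: int):
        if self.consumed_gb + gb > self.capacity_gb:
            raise RuntimeError("pool out of space")
        self.consumed_gb += gb

class LUN:
    def __init__(self, pool: StoragePool, size_gb: int, thick: bool):
        self.pool, self.size_gb, self.thick = pool, size_gb, thick
        self.written_gb = 0
        if thick:
            pool.reserve(size_gb)       # thick: full size reserved up front

    def write(self, gb: int):
        if self.written_gb + gb > self.size_gb:
            raise RuntimeError("LUN full")
        if not self.thick:
            self.pool.reserve(gb)       # thin: pool capacity consumed only on write
        self.written_gb += gb

pool = StoragePool(capacity_gb=10_000)
thin = LUN(pool, size_gb=2_000, thick=False)    # consumes no pool space yet
thick = LUN(pool, size_gb=2_000, thick=True)    # reserves 2 TB immediately
thin.write(100)
print(pool.consumed_gb)   # 2100: 2000 reserved by the thick LUN + 100 written to the thin LUN
```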
You will still be able to use traditional RAID groups if you want for specific use cases, but in order to get the most out of your CX4, virtual provisioning storage pools are the way to go. Why? Because it is a great way of spreading I/O workloads across as many spindles as possible to get the most bang for your storage buck. Other storage vendors have been doing this for years in one form or another (NetApp, HP EVA, 3PAR to name a few). It only seems natural for EMC to finally start moving to virtual storage pools as that is what the market is asking for.
The other good reason for using storage pools is that EMC engineering is going to start building new features that take advantage of storage pool technology.
One such feature is Fully Automated Storage Tiering, or FAST. This will build upon virtual storage pools and allow data to be automatically placed into the proper storage tier (or disk type) at the sub-LUN level. The CLARiiON will move the 1 GB chunks of the thin LUNs to the proper storage type in the pool as those chunks "heat up" or "cool down."
To show how this works, consider the following scenario: say I have a CX4-240 with a storage pool consisting of (5) 72 GB SSDs, (30) 450 GB FC drives, and (20) 1 TB SATA drives, and I provision a 500 GB thin LUN from that pool to a host for a SQL database. As the SQL server uses that LUN, the hot chunks holding the most-used SQL tables will be moved to the SSDs for high performance, while untouched portions of the database will be moved to slower disk in the pool, either the FC or SATA drives. Over the life of the LUN, FAST will continually tune it, moving 1 GB chunks to the appropriate disk type based on their "temperature."
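A highly simplified sketch of the idea, with made-up tier sizes and I/O counts and no claim to match EMC's actual relocation algorithm: rank the 1 GB chunks by recent I/O and fill the fastest tier first.

```python
# Highly simplified illustration of sub-LUN tiering: rank 1 GB chunks by
# recent I/O ("temperature") and place the hottest chunks on the fastest
# tier that still has room. Conceptual sketch only, not EMC's FAST algorithm.

# (tier name, capacity in 1 GB chunks), fastest first -- toy numbers
TIERS = [("SSD", 20), ("FC", 200), ("SATA", 1000)]

def relocate(chunk_io_counts: dict[int, int]) -> dict[int, str]:
    """Map each chunk id to a tier, hottest chunks to the fastest tier."""
    placement = {}
    ranked = sorted(chunk_io_counts, key=chunk_io_counts.get, reverse=True)
    tier_iter = iter(TIERS)
    tier, remaining = next(tier_iter)
    for chunk in ranked:
        while remaining == 0:
            tier, remaining = next(tier_iter)
        placement[chunk] = tier
        remaining -= 1
    return placement

# 500 GB thin LUN = 500 one-GB chunks; pretend chunks 0-9 hold the hot SQL tables
io = {c: (10_000 if c < 10 else 5) for c in range(500)}
placement = relocate(io)
print(placement[3], placement[400])   # hot chunk -> "SSD", cold chunk -> "SATA"
```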
FLARE 30 also introduces compression for thin LUNs in a storage pool. Traditional RAID group LUNs will be migrated into storage pools in order to support compression, and data compression will run as a background process. EMC did state that this is intended for "relatively inactive LUNs," such as archive volumes, backup copies, or static data repositories.
The last really interesting feature coming out in FLARE 30 is FAST Cache. In simple terms, EMC will use SSDs as an extension of SP cache to help with overall storage performance. This provides a much larger, scalable cache that can be turned on or off on a per-LUN basis. Unlike the PAM card that NetApp uses in its filers to speed up read cache, FAST Cache can be used as either read/write cache, via RAID 1 or 10, or as read cache via RAID 0. Depending on the CX4 model, FAST Cache will scale up to 2 TB of extended cache on the CX4-960. FAST Cache will support both traditional RAID group LUNs and storage pool LUNs.
FAST Cache is going to give EMC an excellent way of making the high price of SSDs more palatable for the mid-tier market. Rather than trying to find the one small table of an Oracle database that requires 6,000+ IOPS and move it specifically to SSDs, FAST Cache has the potential to have an immediate impact on any customer environment that needs a performance boost to extend the life of its current CX4.
Last, but certainly not least, EMC will be introducing a new unified management framework called Unisphere. Unisphere will be able to manage CLARiiONs, Celerras, and RecoverPoint/SE from the same management interface. Unfortunately, only Celerras with the new DART 6.0 will be supported for management in Unisphere. On the other hand, CLARiiONs with FLARE 19 and above will be supported. From what I have seen, EMC has taken great steps to make Unisphere more intuitive and easier to use than Navisphere or Celerra Manager. Going forward, as the product matures EMC will introduce additional management capabilities into Unisphere to manage additional EMC products.
Author: Tim Donovan, President
On April 30, we received some great news: Daymark has been honored by the Boston Business Journal as one of the "Best Places to Work" for 2010! We are delighted, because this speaks volumes about who we are as a team, and as a company.
In order to be considered, the company needed to be nominated by an employee or an "outside supporter." Then an independent company (Quantum Workplace) conducted and tabulated surveys of employees; to qualify, 85% of full-time employees needed to participate. No individual information from the survey was shared with Daymark, so employees were able to be completely candid, and no input from Daymark's management team was considered.
When the results were tabulated, Daymark was one of 20 companies to be named in the Small Company category. This really is validation that what we do here works, for our customers and our employees. Our team is made up of local consultants and experts who will do what it takes for our customers. Our "whiteboard to keyboard" approach means that the person who designs a solution stays actively involved in its implementation. This boosts customer satisfaction and allows our employees to take ownership of their projects. We trust them to deliver their best, and they do.
We'll continue to do all that we can to ensure that Daymark meets the needs of its customers and its employees. We are far from perfect, but we are completely committed to getting it right for our customers and employees. Nice job, Team Daymark!
P.S. If you are looking for a great place to work, drop us a note! We are always on the lookout for talented experts to add to our growing team.
Author: Sean Gilbride, Director of Professional Services Operations
As promised in my last post, here are a couple of additional articles related to cloud computing that contain some great food for thought. I'd also like to hear what your thoughts are on this subject.
Cyberattack on Google Said to Hit Password System
Ever since Google disclosed in January that Internet intruders had stolen information from its computers, the exact nature and extent of the theft has been a closely guarded company secret. But a person with direct knowledge of the investigation now says that the losses included one of Google's crown jewels, a password system that controls access by millions of users worldwide to almost all of the company's Web services, including e-mail and business applications.
The intruders do not appear to have stolen passwords of Gmail users, and the company quickly started making significant changes to the security of its networks after the intrusions. But the theft leaves open the possibility, however faint, that the intruders may find weaknesses that Google might not even be aware of, independent computer experts said.
These new details seem likely to increase the debate about the security and privacy of vast computing systems such as Google's that now centralize the personal information of millions of individuals and businesses. Because vast amounts of digital information are stored in a cluster of computers, popularly referred to as "cloud" computing, a single breach can lead to disastrous losses.
Spam Suspect Uses Google Docs; FBI Happy
FBI agents targeting alleged criminal spammers last year obtained a trove of incriminating documents from a suspect's Google Docs account, in what appears to be the first publicly acknowledged search warrant benefiting from a suspect's reliance on cloud computing.
The warrant, issued August 21 in the Western District of New York, targeted Levi Beers and Chris de Diego, the alleged operators of a firm called Pulse Marketing, which was suspected of launching a deceptive e-mail campaign touting a diet supplement called Acai Pure. The warrant demanded the e-mail and "all Google Apps content" belonging to the men, according to a summary in court records.
Google provided the files 10 days later. From Beers' account, the FBI got a spreadsheet titled "Pulse_weekly_Report Q-3 2008" that showed the firm spammed 3,082,097 e-mail addresses in a single five-hour spree. Another spreadsheet, "Yahoo_Hotmail_Gmail - IDs," listed 8,000 Yahoo webmail accounts the suspects allegedly created to push out their spam. The Yahoo accounts were established using false information, allegedly in violation of the CAN SPAM Act.
Privacy advocates have long warned that law enforcement agencies can access sensitive files stored on services like Google Docs with greater ease than files stored on a target's hard drive. In particular, the 1986 Stored Communications Act allows the government to access a customer's data whenever there are "reasonable grounds" to believe the information would be relevant in a criminal investigation - a much lower legal standard than the "probable cause" required for a search warrant.
Is your company moving toward, or considering, implementing a public cloud solution? I'd like to hear from you.