Meeting with clients, I’m hearing a lot of folks complain about two common problems: first, backups are taking far too long to complete; second, recovering a single file is a long, painstaking process.
How do you know if your NetBackup Domain has become too large?
When NetBackup domains grow beyond Symantec’s best-practice guidelines, performance and manageability begin to degrade. During my 15 years as a NetBackup engineer, I’ve found that the following five criteria are worth considering (a rough health-check sketch follows the list):
Catalog size – As the catalog grows, the time required to protect it can become excessive. The catalog backup is a database backup, and the synchronization checkpoints and locking it requires add processing time for other NetBackup jobs.
Number of media servers – As the number of media servers grows beyond recommended values, overall performance can suffer.
Number of EMM devices (media servers, tape drives, robots, disk pools, storage units, SAN clients, Fibre Transport devices) – The number of media servers and EMM devices has a direct bearing on the efficiency of the NetBackup environment. When they grow beyond acceptable levels, NetBackup resource allocation degrades.
Daily number of jobs processed – The number of jobs will eventually affect the overall performance of the system.
Number of processors (sockets) – In large NetBackup environments, the physical processor count is critical to job processing.
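As a starting point for these checks, here is a minimal health-snapshot sketch, assuming it runs on a UNIX master server with the standard NetBackup CLI tools (nbemmcmd, bpdbjobs) on the PATH. The catalog path is the default UNIX location, and the output parsing is illustrative rather than tied to any specific NetBackup release.

```python
#!/usr/bin/env python3
"""Rough health snapshot of a NetBackup domain.

A minimal sketch, assuming a UNIX master server with the standard
NetBackup CLI tools (nbemmcmd, bpdbjobs) on the PATH. The catalog
path is the default UNIX location; the parsing is illustrative,
not tied to Symantec-published limits or formats.
"""
import subprocess

CATALOG_PATH = "/usr/openv/netbackup/db"  # default UNIX catalog location

def run(cmd):
    """Run a CLI command, returning stdout or '' on failure."""
    try:
        return subprocess.run(cmd, capture_output=True,
                              text=True, check=True).stdout
    except (OSError, subprocess.CalledProcessError):
        return ""

# Criterion 1 -- catalog size: du -sk reports kilobytes used.
du_out = run(["du", "-sk", CATALOG_PATH])
catalog_gb = int(du_out.split()[0]) / (1024 * 1024) if du_out else 0.0

# Criteria 2 and 3 -- media server count: nbemmcmd -listhosts prints
# one registered host per line; filter for media servers.
hosts = run(["nbemmcmd", "-listhosts"])
media_servers = [ln for ln in hosts.splitlines() if "media" in ln.lower()]

# Criterion 4 -- job volume: bpdbjobs prints one line per job record.
jobs = run(["bpdbjobs", "-report", "-all_columns"])
job_count = len(jobs.splitlines())

print(f"Catalog size:    {catalog_gb:8.1f} GB")
print(f"Media servers:   {len(media_servers):8d}")
print(f"Jobs in history: {job_count:8d}")
```

Compare the numbers against your own sizing guidance; the point is to trend them over time, not to treat any single snapshot as a verdict.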
Firmware Management – The Evil Mistress
Firmware management is not a topic that traditionally gets anyone excited. In fact, it’s typically relegated to the bottom of the IT team’s priority list because it’s often painful and time-consuming.
From a management perspective, budget spent on firmware management is very difficult to justify, and its ROI is nearly impossible to measure. But firmware is an evil mistress: if it isn’t attended to often enough, out-of-date firmware can stall an OS upgrade, prevent the introduction of new hardware or, worse, cause an unexpected outage of critical equipment in the data center.
We all have stories of servers, switches, storage arrays, and the like that ran fine for years until one day the system crashed for no reason. We later discovered that a critical bug fix had been quietly released two firmware revisions before the one we were running. Grrrr!
The addition of Storage Lifecycle Policies (SLPs) in Symantec NetBackup 7.6 has given backup administrators a very effective tool for managing backups, snapshots, image duplication, and replication.
While SLPs simplify these tasks, careful consideration is required when monitoring and managing the policies and the components they use in your NetBackup environment.
The SLP administration window we all know and love is a great tool for managing SLPs, but there are other things you should consider as well.
To help manage your SLPs and make sure they’re optimized, consider these eight additional configuration tips, along with the backlog check sketched below.
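One practical check worth automating is the SLP backlog. Below is a minimal sketch, assuming the nbstlutil utility that ships with NetBackup is on the PATH; the exact output format varies by release, so the line parsing and the threshold are illustrative assumptions, not a documented format.

```python
"""Quick check of the SLP duplication/replication backlog.

A minimal sketch, assuming the nbstlutil utility that ships with
NetBackup is on the PATH. Output formats vary by release, so the
parsing and the threshold below are illustrative assumptions.
"""
import subprocess

# nbstlutil stlilist -image_incomplete lists images whose SLP
# operations (duplication, replication) have not yet finished.
out = subprocess.run(
    ["nbstlutil", "stlilist", "-image_incomplete", "-U"],
    capture_output=True, text=True).stdout

# Assumed format: one "Image: <backup id>" line per pending image.
backlog = [ln for ln in out.splitlines()
           if ln.lstrip().startswith("Image:")]

print(f"Images with incomplete SLP operations: {len(backlog)}")
if len(backlog) > 100:  # illustrative threshold, tune for your domain
    print("Backlog is growing; review duplication windows and storage.")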
It happens all the time: a customer has a problem that’s tough to solve. I was recently asked how best to protect a virtual environment stretched between two data centers. The customer wanted all administration through vCenter, with instant recovery and the ability to meet stringent RPOs and RTOs. And the catch: they wanted the backed-up data stored on a different vendor’s array using a different protocol, NFS instead of Fibre Channel (FC) block storage.
Is that light you see at the end of the proverbial tunnel, or is it the headlight of an oncoming train?
The future of your professional career just might depend on your ability to successfully lead your company to the cloud. So many things to consider…
Yes, the cloud offers many advantages to your business including agility and high levels of fault tolerance, but in and of itself, the cloud does not release you from backing up your data in order to protect yourself from user error, data corruption, or data loss.
Just like provisioning new applications or scaling current ones, you need to consider every angle when protecting your data that lives in the cloud.
Imagine you're sitting in the studio audience, dressed in your best Game of Thrones or Batman costume. Monty Hall walks right up next to you and says, "I've got a deal for you!" Can you picture it?
As he does on every show, Monty points to two doors. They look identical. You can't see what's behind the doors, but the payoff is potentially great. He says, "Your transition to the cloud sits behind either door #1 or door #2. Which door do you choose?"
Historically, administrators faced a challenge. Either they kept a small footprint with View Linked Clones and had to re-provision applications (sometimes manually) after every recompose operation, or they gave everyone thick persistent desktops, which drove up storage costs and significantly decreased the ROI of VDI (a rough comparison follows below). Enter CloudVolumes.
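To make that tradeoff concrete, here is a back-of-the-envelope comparison; the desktop count, image sizes, and delta growth are purely illustrative assumptions, not measurements from any real deployment.

```python
"""Back-of-the-envelope VDI storage comparison.

All numbers below are illustrative assumptions, not measurements.
"""
DESKTOPS = 500
FULL_CLONE_GB = 40   # thick persistent desktop per user
REPLICA_GB = 40      # shared linked-clone base replica
DELTA_GB = 4         # per-desktop delta that grows until recompose

persistent_tb = DESKTOPS * FULL_CLONE_GB / 1024
linked_tb = (REPLICA_GB + DESKTOPS * DELTA_GB) / 1024

print(f"Thick persistent desktops: {persistent_tb:5.1f} TB")
print(f"View Linked Clones:        {linked_tb:5.1f} TB")
# Roughly 19.5 TB vs 2.0 TB: linked clones save storage, but every
# recompose resets the deltas -- which is why applications had to be
# re-provisioned afterward.
```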