Why Backups Matter: Protecting Data, Systems, and Business Continuity

Backups can be divided into two main categories: data backups and system backups.
- Data backups cover your data itself: documents, files, databases, and so on.
- System backups ensure your servers and clients remain bootable, that their operating systems are healthy and up to date, and that they can be quickly restored after a failure.

System backups are essential because rebuilding servers can be time-consuming. Servers often run critical application software and services that take time to install and configure. Additionally, most operating systems require patches and security updates, further extending recovery time.

Client PCs can also be complex to rebuild. They may need graphics card drivers, firmware updates, system frameworks, antivirus definitions, and various service updates. Operating systems like Windows and macOS include lengthy patching processes that slow things down even more. That’s why backing up both servers and clients is vital—rebuilding takes time.

The Importance of Data Backups
Since data is always changing, frequent data backups are crucial. The more frequently data changes, the more often it should be backed up. Modern operating systems and file systems offer file history and version control to help manage these changes. However, due to the sheer number of file changes occurring daily, it’s nearly impossible to track them manually. Therefore, a realistic and consistent backup routine is essential to ensure that nothing is lost.

Redundancy ≠ Backup
Many modern applications offer redundancy through replication. This includes replicated servers or applications running on separate virtual machines. In the event of a failure, you can quickly switch to the replica and continue working.

However, replication is not a true backup. Replicas reflect the state of the system at a specific point in time. If a failure or data loss occurs between replication intervals, that data may still be lost. Replication should be seen as a supplement to backups, not a replacement.

Backups are especially important because you often don’t know when data was lost. If a file was deleted or corrupted days ago, replication may have already overwritten the good copy.

Hardware Redundancy
Most modern servers include hardware redundancy like mirrored drives (RAID) and dual power supplies to reduce the risk of hardware failure. While important, this type of redundancy is not a substitute for backups. It helps ensure uptime but doesn’t protect against data corruption, accidental deletion, or ransomware.

Securing Backup Infrastructure

A good backup target is Network Attached Storage (NAS), as it actively manages the backup data. In a secure setup, the backup system pulls data from the source but keeps the backup data isolated from the original server. This separation improves security and reduces the risk of backup data being compromised—especially by ransomware.

Best practices include:
- Using separate user credentials for backup and data servers
- Ensuring data servers do not have access to the backup systems
- Running backup agents on the backup server, not the data server

This architecture minimizes vulnerabilities and maximizes protection.
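As a minimal sketch of that pull model, the script below (the hostnames, account, and paths are illustrative assumptions, and it presumes rsync plus SSH key authentication are already in place) shows what a backup server might run to pull snapshots from a data server:

```python
#!/usr/bin/env python3
"""Pull-based backup sketch, run on the backup server itself.
Hosts, accounts, and paths below are placeholder assumptions."""
import subprocess
from datetime import datetime

DATA_SERVER = "backup-ro@dataserver.example.com"  # read-only account on the data server
SOURCE_PATH = "/srv/shares/"                      # data to protect
TARGET_ROOT = "/backups/dataserver"               # local storage on the backup NAS

def pull_backup() -> None:
    # Each run lands in a timestamped directory; unchanged files are
    # hard-linked against the previous snapshot to keep runs small.
    stamp = datetime.now().strftime("%Y-%m-%d_%H%M")
    target = f"{TARGET_ROOT}/{stamp}"
    subprocess.run(
        ["rsync", "-a", "--delete",
         f"--link-dest={TARGET_ROOT}/latest",      # prior snapshot, if one exists
         f"{DATA_SERVER}:{SOURCE_PATH}", target],
        check=True,
    )
    # Repoint 'latest' so the next run links against this snapshot.
    subprocess.run(["ln", "-sfn", target, f"{TARGET_ROOT}/latest"], check=True)

if __name__ == "__main__":
    pull_backup()
```

The important property is the direction of trust: the backup server holds the credentials and reaches into the data server, never the other way around, so a compromised data server cannot touch the backups.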

Backup Frequency and Strategies
You should back up frequently: at least once daily for data that changes often. As the volume and rate of change grow, add incremental backups throughout the day rather than relying on a single nightly run.

Follow the 3-2-1 rule:
- Keep three copies of your data
- Store them on two different storage devices or media
- Keep one backup offsite, in a separate physical location

Archive backups on a monthly, quarterly, and yearly basis. Use audit logging to track file changes so that lost data can be traced and recovered more easily.
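To make the rule concrete, here is a small sketch (again with placeholder paths and a hypothetical offsite host): with the live data as the first copy and the snapshot on the backup NAS as the second, mirroring that snapshot offsite completes the 3-2-1 picture.

```python
#!/usr/bin/env python3
"""3-2-1 sketch: live data (copy 1), local backup snapshot (copy 2),
offsite mirror (copy 3). Paths and host are placeholder assumptions."""
import subprocess

SNAPSHOT = "/backups/dataserver/latest/"                    # local backup snapshot
OFFSITE = "backup@offsite.example.com:/backups/dataserver/" # offsite mirror

# -a preserves permissions and timestamps; run this after each backup cycle.
subprocess.run(["rsync", "-a", SNAPSHOT, OFFSITE], check=True)
```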

Built-in OS Backup Features
Modern operating systems offer built-in backup tools:
- Windows 10 and 11 use File History to create local file versions.
- macOS includes Time Machine, which makes periodic, restorable file snapshots.
Always store these backups on a separate hard drive to avoid single points of failure.

Choosing the Right Backup Solution
There are many excellent backup solutions available, including hybrid systems that combine OS-level tools, NAS, and cloud storage. Some vendors offer complete software and hardware bundles. Look for:
- Support for both data and system backups
- Agent-based architecture
- Strong encryption and access control

Veeam is a well-known vendor that provides a comprehensive suite for backing up physical and virtual systems. It offers free agents with its server licenses, making it a cost-effective option for many setups.

Running DeepSeek on Ollama: A Private, Local AI Setup That Works

Over the past few weeks, I’ve been experimenting with running DeepSeek on Ollama, and I’m genuinely impressed. The ability to run AI models locally—without sending data over the Internet—means my chats remain private and confidential. For anyone concerned with data security, this is a huge win.

Hardware and Setup
The hardware requirements aren’t outrageous. I had a 24GB NVIDIA GPU left over from a previous project. I had bought it on Facebook Marketplace about 18 months ago, and though it had been sitting unused for over a year, it’s still a solid piece of hardware.

Setting up Ubuntu was surprisingly straightforward. I chose not to virtualize the machine or its operating system, which made the setup process a lot smoother. Installing the necessary drivers to get the GPU working was easy. While virtualization is an option, GPU passthrough can be tricky, so I’ve saved that for a potential future project.
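With the drivers in place, Ollama serves models over a local HTTP API on port 11434, so any script on the machine can query DeepSeek without anything leaving the network. A minimal Python client might look like this (the model tag is an assumption; substitute whatever you fetched with `ollama pull`):

```python
#!/usr/bin/env python3
"""Minimal client for a local Ollama server; no data leaves the machine.
The model tag is an assumption -- substitute the one you pulled."""
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint
MODEL = "deepseek-r1:14b"                           # hypothetical model tag

def ask(prompt: str) -> str:
    payload = json.dumps({"model": MODEL, "prompt": prompt, "stream": False})
    req = urllib.request.Request(
        OLLAMA_URL, data=payload.encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]

if __name__ == "__main__":
    print(ask("In two sentences, why run a language model locally?"))
```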

Comparing with Mac Studio
Before this, I ran LLaMA 3 on a 32GB M1 Mac Studio, which handled most open-source models without breaking a sweat. Naturally, I expected a Linux setup with more cores and a dedicated GPU to be significantly faster. But in practice, it wasn’t. The Mac Studio held its own and, in many ways, performed just as well.

Why Local AI Matters
I truly believe this is one of the simplest and most effective ways to bring private, secure AI capabilities to any organization. There’s a wide range of open-source large language models (LLMs) available, each with its own strengths. In reality, most people don’t need the full power of GPT-4 for everyday tasks.

Going the open-source route is not only free but also gives me more control—especially when fine-tuning models to fit my needs. Running LLMs on in-house hardware also lets multiple users share the GPU, which helps distribute the cost efficiently.

My Use Case: Code Review
I primarily use DeepSeek to check my code. Since I work on a lot of collaborative projects, it’s great to have an AI assistant that can review my work instantly—no need to wait for a colleague to be available. This has been a huge productivity boost.
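As a rough illustration of that workflow (the model tag and the review prompt are my own assumptions, not a prescribed setup), a short script can hand a source file to the local model and print the critique:

```python
#!/usr/bin/env python3
"""Local code-review helper sketch: sends one source file to a DeepSeek
model on Ollama. Model tag and prompt wording are illustrative."""
import json
import sys
import urllib.request

def review(path: str, model: str = "deepseek-r1:14b") -> str:
    code = open(path, encoding="utf-8").read()
    payload = json.dumps({
        "model": model,
        "prompt": "Review this code for bugs, unclear naming, and missing "
                  f"error handling:\n\n{code}",
        "stream": False,
    })
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload.encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]

if __name__ == "__main__":
    print(review(sys.argv[1]))
```

Run it as `python review.py myscript.py` and the feedback comes back in seconds, with the code never leaving the box.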

I'm also planning to test other open-source LLMs to evaluate their strengths in areas like coding, writing, summarization, and more.

Cost Savings for Organizations

A typical subscription for a professional-grade AI service costs around $20 per user per month. For a small organization with 20 people, that adds up quickly: roughly $400 each month, or about $4,800 per year.

Some of my clients, especially biotech companies, have proprietary IP they can’t risk uploading to a cloud-based AI provider. Others are consultants who don’t have the legal rights to send confidential data to third parties. For these use cases, setting up a private AI server is a perfect solution.

Final Thoughts
Local, private AI is no longer just a dream—it’s here and working, even on consumer hardware. With open-source LLMs evolving rapidly, I think we’ll see more and more organizations making the switch to in-house AI tools.

If you’re thinking about it, now’s a great time to dive in.