Block downloads based on file SIZE …

I forget the name of the gentleman from FB who asked me a question, so let's call him Mr. X. The query was: how to block downloads of large files (say 5 MB or above) during specific hours (like 8pm-12am)?

But the issue is: how will the router know the file size before the file is downloaded? The router has no way of knowing how big a connection will be. A workaround is to create a firewall filter rule that allows the first 5 MB of a connection through and, once that threshold is reached, starts dropping packets. I used this on a network in Gulistan-e-Jauhar, and it worked well.
But do remember that it will also affect streaming, RDP-like protocols, VPNs, and any other connection that transfers a large number of bytes.

/ip firewall filter
add action=drop chain=forward comment="downloading of files larger than 5mb (it will break the connection after 5mb of transfer) applicable from 8pm till 12am / zaib" connection-bytes=5242880-0 disabled=no protocol=tcp time=20:00:00-23:59:59,sun,mon,tue,wed,thu,fri,sat




Mikrotik: How to Block Winbox Discovery + Limit Winbox Access

To hide your Mikrotik from appearing in the Winbox scan neighbor list, and to limit Winbox access to your admin PC only, use the following.

/tool mac-server
add disabled=yes interface=all
/tool mac-server ping
set enabled=no
/ip firewall filter
add action=drop chain=input comment="block mikrotik discovery" disabled=no dst-port=5678 protocol=udp
add action=drop chain=input comment="ALL WINBOX REQUEST By MAC Address" disabled=no dst-port=20561 protocol=udp
add action=drop chain=input comment="ALL WINBOX REQUEST EXCEPT FROM MY PC" disabled=no dst-port=8291 protocol=tcp src-address=!192.168.88.2

(Replace 192.168.88.2 with your admin PC's IP address.)

You can also disable network neighbor discovery on the interface to which your network users are connected.

Example: /ip neighbor discovery set ether1 discover=no



Bonus: the following rules catch hosts infected with worms that spread over SMB (TCP port 445). Sources opening new connections to port 445 are added to an address list for one hour, and forwarded port-445 traffic from listed hosts is dropped:

/ip firewall mangle
add action=add-src-to-address-list address-list=Worm-Infected-p445 address-list-timeout=1h chain=prerouting connection-state=new disabled=no dst-port=445 limit=5,10 protocol=tcp
/ip firewall filter
add action=drop chain=forward disabled=no dst-port=445 protocol=tcp src-address-list=Worm-Infected-p445

Everything You Need to Know About SLC, MLC, & TLC NAND Flash

The Anatomy of an SSD

MyDigitalSSD BP4e mSATA SSD with two enclosed NAND flash memory chips installed. The controller chip is designed by PHISON.

  • A. NAND Flash: The part where your data is stored, in blocks of non-volatile (does not require power to maintain data) memory.
  • B. DDR Memory: Small amount of volatile memory (requires power to maintain data) used to cache information for future access. Not available on all SSDs.
  • C. Controller: Acts as the main connector between the NAND flash and your computer. The controller also contains the firmware that helps manage your SSD.

What is NAND Flash?

NAND flash memory is built up of many cells that hold bits, and those bits are turned on or off through an electric charge. How those on/off cells are organized represents the data stored on the SSD. The number of bits per cell also determines the naming of the flash; for example, Single Level Cell (SLC) flash contains a single bit in each cell.

The reason SLC is only available at lower capacities comes down to the physical real estate the NAND flash occupies on the Printed Circuit Board (PCB). Don't forget that the circuit board has to fit the controller, DDR memory, and flash within standard dimensions to fit inside your computer. MLC doubles the number of bits per cell, whereas TLC triples it, and this opens the door to higher-capacity SSDs.
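As an aside on why packing more bits per cell hurts accuracy and endurance: every extra bit doubles the number of distinct charge levels a cell must reliably hold and distinguish. A quick sketch:

```shell
# Each extra bit per cell doubles the number of charge levels the cell
# must distinguish: SLC = 2 levels, MLC = 4, TLC = 8.
for bits in 1 2 3; do
    echo "$bits bit(s) per cell -> $((1 << bits)) charge levels"
done
```

More levels means finer voltage margins, which is why SLC reads and writes most accurately and lasts the longest.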

There are particular reasons why manufacturers build flash memory with a single bit per cell, as in SLC. SLC has the advantage of being the fastest and most durable, but it has the cons of being more expensive and not being available in higher storage capacities. That is why SLC is preferred for heavy enterprise usage.

Compared to SLC, MLC and TLC flash are cheaper to produce and available in higher storage capacities, but at the tradeoff of relatively shorter life spans and slower read/write speeds. MLC and TLC are preferred for everyday consumer computer usage.

Understanding your own needs for computing and NAND flash basics will not only help you pick the right SSD, but will also help you figure out factors such as the price behind the product.

SLC (Single Level Cell)

The Single Level Cell flash is so called for its single bit that can either be on or off when charged. This type of flash has the advantage of being the most accurate when reading and writing data, and also the benefit of lasting the most read and write cycles: the program read/write life cycle is expected to be between 90,000 and 100,000. This type of flash has done exceptionally well in the enterprise market because of its life span, accuracy, and overall performance. You won't see too many home computers with this type of NAND due to its high cost and low storage capacities.


Pros:

  • Has the longest lifespan and charge cycles of any other type of flash.
  • More reliable, with less room for read/write error.
  • Can operate in a broader temperature range.


Cons:

  • The most expensive type of NAND flash on the market.
  • Often only available in smaller capacities.

Recommended for:

  • Industrial use and workloads that require heavy read/write cycles such as servers.

eMLC (Enterprise Multi Level Cell)

eMLC is MLC flash optimized for the enterprise sector, with better performance and endurance. Read/write data life cycles are expected between 20,000 and 30,000. eMLC provides a lower-cost alternative to SLC, yet maintains some of SLC's advantages.


Pros:

  • A cheaper alternative to SLC for an enterprise SSD.
  • Has better performance and endurance over standard MLC.


Cons:

  • Does not match SLC NAND flash SSDs in performance.

Recommended for:

  • Industrial use and workloads that require heavy read/write cycles such as servers.

MLC (Multi Level Cell)

MLC flash, as its name suggests, stores multiple bits of data per cell. The big advantage of this is the lower cost of manufacturing versus SLC flash. The lower production cost is generally passed on to you as the consumer, and for that reason MLC is very popular among many brands. MLC flash is preferred for consumer SSDs for its lower cost, but its read/write life is shorter in comparison to SLC, at around 10,000 cycles per cell.


Pros:

  • Lower production costs are passed on to you, the consumer.
  • Is more reliable than TLC flash.


Cons:

  • Not as durable and reliable as SLC or enterprise SSDs.

Recommended for:

  • Everyday consumer use, gamers, and enthusiasts.

TLC (Triple Level Cell)

Storing 3 bits of data per cell, TLC flash is the cheapest form of flash to manufacture. The biggest disadvantage of this type of flash is that it is only suitable for consumer usage and would not be able to meet the standards of industrial use. Read/write life cycles are considerably shorter, at 3,000 to 5,000 cycles per cell.


Pros:

  • Cheapest to manufacture, which in turn leads to cheaper-to-market SSDs.


Cons:

  • Cells survive considerably fewer read/write cycles compared to MLC NAND. This means TLC flash is good for consumer use only.

Recommended for:

  • Everyday consumer use, web/email machines, netbooks, and tablets.

The SSD Life Cycle

Like all good things, an SSD does not last forever. As noted above, a solid state drive’s life cycle can be directly attributed to the NAND flash it comes with. SLC flash, for example, will last longer than MLC or TLC flash but that comes at a hefty price tag.

With MLC and TLC flash commonly found in consumer SSDs, the real question is how long will they last? Several available consumer-grade SSDs (most with MLC NAND, one with TLC NAND) have been endurance-tested, and the results are promising. All of the devices tested survived at least 700 terabytes (TB) of writes before failing, and a couple even pushed past a petabyte (PB).

This is a lot of data, but let's put it into perspective by writing 1 PB to an SSD.

1 petabyte (PB) = 1,000 terabytes (TB) = 1,000,000 gigabytes (GB) = 1,000,000,000 megabytes (MB)

That 1 PB could net you:

  • 222,222 movie DVDs at 4.5GB a DVD
  • 333,333,333 mp3 songs at 3MB a song
  • 500,000,000 jpg photos at 2MB an image
  • 15,384 installs of the game Grand Theft Auto V at 65GB an install
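The figures above are easy to check yourself. A quick sketch using the same decimal units (integer division, so results round down):

```shell
# 1 PB in decimal units: 1,000,000 GB, i.e. 1,000,000,000 MB.
PB_GB=1000000
echo "DVDs  (4.5 GB each): $((PB_GB * 10 / 45))"     # 222222
echo "MP3s  (3 MB each):   $((PB_GB * 1000 / 3))"    # 333333333
echo "JPGs  (2 MB each):   $((PB_GB * 1000 / 2))"    # 500000000
echo "GTA V (65 GB each):  $((PB_GB / 65))"          # 15384
```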

Looking at those numbers should put to rest any doubts about your SSD failing any time soon.

If you are considering an MLC or TLC SSD for everyday consumer use, like storing music, photos, software, and personal documents, or playing games, then you can feel assured that your SSD should last several years. This kind of usage is light compared to the ongoing heavy read/write usage of the enterprise servers and computers outlined in the next section.

Note: For anyone worried about the lifespan of their SSD, features such as Self-Monitoring Analysis and Reporting Technology, or S.M.A.R.T. for short, can help you better keep track of your SSD’s longevity.

Enterprise vs. Consumer SSDs

Enterprise SSDs are commonly found in database servers. The differences and demands expected of enterprise SSDs set them a world apart from consumer SSDs. Enterprise SSDs are designed to meet a higher standard and consistently perform in high-tech services, the military, science, and any area that requires a large amount of reading and writing data.

Database servers are one example of where you might see enterprise SSDs; these servers are on 24/7, which demands longer read/write life cycles, faster read/write speeds, and increased reliability and durability in harsh environments.

Consumer SSDs are less expensive, stripped-down versions of enterprise SSDs. This may sound like you are missing out on certain features, but the benefits of a cheaper product with larger storage capacity are worth it. Besides, manufacturers are always increasing the performance of SSDs while bringing down the price.

In Conclusion

At this point, you should have a good idea of the differences between SLC, MLC, and TLC NAND flash. The basics we discussed here, with insight into why some types cost more than others, should clear up any confusion as to what type of flash best fits your needs.

Flash Type         SLC                    eMLC                   MLC                  TLC
                   (Single Level Cell)    (Enterprise MLC)       (Multi-Level Cell)   (Triple-Level Cell)
Read/Write Cycles  90,000-100,000         20,000-30,000          8,000-10,000         3,000-5,000
Bits Per Cell      1                      2                      2                    3
Write Speed        ★★★★★                  ★★★★☆                  ★★★☆☆                ★★☆☆☆
Endurance          ★★★★★                  ★★★★☆                  ★★★☆☆                ★★☆☆☆
Cost               ★★★★★                  ★★★★☆                  ★★★☆☆                ★★☆☆☆
Usage              Industrial/Enterprise  Industrial/Enterprise  Consumer/Gaming      Consumer

The important thing to take away from this guide is that modern SSDs are built to last a considerable amount of time. While their life-cycle should be taken into account, it should by no means prevent you from buying faster and more efficient storage.


Comments on Everything You Need to Know About SLC, MLC, & TLC NAND Flash

Friday, January 6, 2017 08:37:59

We have not encountered an eSLC as of yet. SLC flash on its own is a high-grade flash and, low capacity aside, is more than suitable for enterprise applications.

Frank Thursday, January 5, 2017 13:42:01
How to buy eslc. canot find
Bill Friday, December 30, 2016 08:33:16
A very good explanation written in such a way that a typical computer user can understand the basics without their eyes glazing over. There are several typos which are distracting however.
Midhun Lohidakshan Wednesday, December 7, 2016 02:33:55
Very useful information. Thanks!
Prasad Pattadakal Monday, November 28, 2016 01:12:24
Helpful Article. Easy to understand
Ben Sunday, October 2, 2016 13:24:51
I wonder if it is possible, to overcome the electron lingering problem with TLC, by periodically scheduling,during data wear leveling, to leave each entire block full of zeros for a time, before again re-using it? This would be a form of Tender Loving Care 😉
Sylvain Saturday, October 1, 2016 05:46:05
Thanks for the clean explanation 🙂
Rathlo Friday, September 30, 2016 04:23:52
Just FYI: The conversions from petabytes, terabytes and gigabytes should be 1024-based: 1 PB = 1024 TB, 1 TB = 1024 GB, 1 GB = 1024 MB.

Wednesday, August 24, 2016 09:59:44

SLC, MLC, and TLC are all considered NAND flash. The difference between the SLC, MLC, and TLC is in their construction and physical design. For this reason, no firmware can change one type of NAND flash to the other.

Firmware updates can improve reliability and performance, so it is always best to consider updates offered from your SSD manufacturer’s website.

Evelyn Wednesday, August 24, 2016 05:11:32
Is the NAND flash used for SLC, MLC and TLC the same? Or can TLC be converted to SLC through firmware?

Friday, August 5, 2016 09:52:49
Eric Hoyer,

Thank you, glad it helped.

Eric Hoyer Friday, August 5, 2016 04:37:02

(translated from German) This is a good article; it lets you see what these drives can do and what is worth buying.

Kind regards,
Eric Hoyer

Friday, March 18, 2016 16:42:23

Thank you for your comment, and your suggestion. SSD lifecycle is something that needs to be understood so that you can continue to operate and secure your data. If you do not have software that monitors your SSD’s status you can always check out Crystal Disk Info. It is a free to use utility found here:

Going into further detail about the information from Crystal Disk Info with the information provided by an SSD manufacturer would prove for an interesting article.

Ankush Thursday, March 17, 2016 09:44:47
Great article and really helpful. I have already purchased a TLC variant of SSD and was wondering how much time before I would have to part away from my $100. That being said, I never backed up my HDDs till now. I just copied off all the data from my old computers onto the new ones. A comparison of HDD and SSD lives will be appreciated, if you ever get time.

Thanks again.

Wednesday, August 12, 2015 12:50:05

This article serves as an introduction to NAND flash and ignores other components and factors that can affect read/write speed. We are comparing the different flash types against each other, and TLC flash is considered slower in speed vs MLC flash.

The speed difference from a consumer standpoint will be minimal or not noticeable.

Simon Tuesday, August 11, 2015 20:42:15
There is some incorrect information here.
For example, MLC in itself does not guarantee faster write speeds.


[Linux] How to Switch the Ubuntu 16.04 LTS Repository from a Foreign Server to a Local Server

What Is a Repository?

First, let me explain what a repository, commonly called a repo, is.

A repository is a place where various applications and programs are stored, packaged to suit the needs of Linux users.

Repositories are normally accessed over the internet; as an alternative when there is no internet connection, they are also available on DVD.

Why Switch the Ubuntu Repository from a Foreign Server to a Local One?

Why should you switch the Ubuntu repository to a local server, and what do you gain by doing so?

The answer: installing programs and applications becomes faster, because you are using a local server inside the country rather than a server abroad.

Checking Which Ubuntu Version You Are Running

Before switching repositories, first make sure which Ubuntu version you are using. To check, run the command below in a terminal/shell:

$ lsb_release -a

How to Change the Ubuntu Repository

There are two steps to changing the Ubuntu repository.

1. Edit the Repository Configuration File

The repository configuration file is named sources.list and is located in the /etc/apt/ directory. We will edit it with the nano text editor.

Before making changes, it is a good idea to back up the original file by running the following command:

# cp /etc/apt/sources.list /etc/apt/sources.list.original

After the backup, continue by editing sources.list, again using the nano text editor:

# nano /etc/apt/sources.list

Once the file is open, delete its entire contents and replace them with a local Ubuntu 16.04 repository. You can find the entries in this article.

For example, if we use the local Kambing UI repository for Ubuntu 16.04, the contents of sources.list will look like this:

deb xenial main restricted universe multiverse
deb xenial-updates main restricted universe multiverse
deb xenial-security main restricted universe multiverse
deb xenial-backports main restricted universe multiverse
deb xenial-proposed main restricted universe multiverse

After that, Save by pressing Ctrl+O, then Exit by pressing Ctrl+X.

Note that to change the contents of sources.list you need to be in superuser (root) mode.

2. Update the Ubuntu Repository

The update process is required after changing repositories, so that the Ubuntu system immediately recognizes which server will be used later for updates and for installing applications or programs.

Here is the update command:

# apt-get update
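For reference, the whole procedure can also be scripted with sed instead of editing the file by hand. This is a sketch, not part of the original article; mirror.example.org is a placeholder for whichever local mirror you choose:

```shell
# Back up, rewrite every archive.ubuntu.com URL to the local mirror,
# then refresh the package index. Run as root (or prefix with sudo).
cp /etc/apt/sources.list /etc/apt/sources.list.original
sed -i 's|http://[a-z0-9.]*archive\.ubuntu\.com|http://mirror.example.org|g' /etc/apt/sources.list
apt-get update
```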

Done. Cheers 😀


[Linux] What Is Swappiness?

When you install Linux, there is always a notice about swap space. Swap space is used when physical memory (RAM) has been completely filled by running processes. The usual rule of thumb sizes the swap partition at twice the RAM. Example: your laptop has 1 GB of RAM, so your swap space should be at least 2 GB (this is a naive calculation; a later post will cover an accurate way to determine swap space size).

What is swappiness?

The system's use of swap space is itself governed by a rule; it is not simply "RAM is full, so start swapping." This is where swappiness comes in. Swappiness is a Linux kernel parameter that determines when swap space is used, and it is one of the factors that determine the performance of a Linux OS. The swappiness parameter defaults to 60 and can be set anywhere from 0 to 100.

How do you change it?

There are two ways to change the swappiness parameter. The first is to edit /proc/sys/vm/swappiness; if it has not been changed, the file contains "60". Just edit it with nano, vi, or whichever editor you prefer. The second way is a direct command: "sudo sysctl vm.swappiness=10" changes the swappiness parameter to "10". So how is swappiness interpreted? We just changed it to 10, which roughly means that once RAM usage reaches 90%, the system starts using swap space. With the default of 60, swap space starts being used once RAM usage reaches about 40%.
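Putting the two methods above together (the sysctl.conf line is the standard way to make the change permanent; the article itself does not mention that step):

```shell
# 1) Read the current value (default is usually 60):
cat /proc/sys/vm/swappiness

# 2) Change it for the running system only (lost on reboot):
sudo sysctl vm.swappiness=10

# 3) Persist it across reboots by appending to /etc/sysctl.conf:
echo 'vm.swappiness=10' | sudo tee -a /etc/sysctl.conf
```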

Is swap space mandatory?

Remember, swap space lives on the hard disk, and hard disk latency is far worse than RAM's. Performance therefore drops when a process is moved to swap space. So if you have a fairly large amount of RAM, a swappiness parameter of "10" is safe to use. Keep swap space usage as small as possible for the sake of your processes' performance.


What you should know about Volume Shadow Copy/System Restore in Windows 7 & Vista (FAQ)


What is Volume Shadow Copy?

Volume Shadow Copy is a service that creates and maintains snapshots (“shadow copies”) of disk volumes in Windows 7 and Vista. It is the back-end of the System Restore feature, which enables you to restore your system files to a previous state in case of a system failure (e.g. after a failed driver or software installation).


Does Volume Shadow Copy only protect my system files?

No. Volume Shadow Copy maintains snapshots of entire volumes. By default, it is turned on for your system volume (C:) and protects all the data on that volume, including all the system files, program files, user settings, documents, etc.


How is this different from System Restore in Windows XP?

In Windows XP, System Restore does not use the Volume Shadow Copy service. Instead, it uses a much simpler mechanism: the moment a program attempts to overwrite a system file, Windows XP makes a copy of it and saves it in a separate folder. In Windows XP, System Restore does not affect your documents – it only protects files with certain extensions (such as DLL or EXE), the registry, and a few other things (details). It specifically excludes all files in the user profile and the My Documents folder (regardless of file extension).


When are restore points created?

Volume shadow copies (restore points) are created before the installation of device drivers, system components (e.g. DirectX), Windows updates, and some applications.

In addition, Windows automatically creates restore points at hard-to-predict intervals. The first thing to understand here is that the System Restore task on Vista and 7 will only execute if your computer is idle for at least 10 minutes and is running on AC power. Since the definition of “idle” is “0% CPU usage and 0% disk input for 90% of the last 15 minutes, plus no keyboard/mouse activity” (source), it could take days for your machine to be idle, especially if you have a lot of programs running in the background.

As you see, the frequency with which automatic restore points are created is hard to estimate, but if you use your machine every day on AC power and nothing prevents it from entering an idle state, you can expect automatic restore points to be created every 1-2 days on Windows Vista and every 7-8 days on Windows 7. Of course, the actual frequency will be higher if you count in the restore points created manually by you and those created before software installations.

Here’s a more precise description: By default, the System Restore task is scheduled to run every time you start your computer and every day at midnight, as long as your computer is idle and on AC power. The task will wait for the right conditions for up to 23 hours. These rules are specified in Scheduled Tasks and can be changed by the user. If the task is executed successfully, Windows will create a restore point, but only if enough time has passed since the last restore point (automatic or not) was created. On Windows Vista the minimum interval is 24 hours; on Windows 7 it is 7 days. As far as I know, this interval cannot be changed.


What can I use Volume Shadow Copy and System Restore for?

  • If your system malfunctions after installing a new video card driver or firewall software, you can launch System Restore and roll back to a working system state from before the installation. If you can’t get your system to boot, you can also do this from the Windows Setup DVD. This process is reversible, i.e. your current state will be automatically saved as a restore point, to which you can later go back. (Note: System Restore will not roll back your documents and settings, just the system files.)
  • If you accidentally delete 10 pages of your dissertation, you can right-click the document, choose Restore previous versions, and access a previous version of it. You can open it (in read-only mode) or copy it to a new location.
  • If you accidentally delete a file or folder, you can right-click the containing folder, choose Restore previous versions, and open the folder as it appeared at the time a shadow copy was made (see screenshot below). All the files and folders that you deleted will be there!


Note: While the Volume Shadow Copy service and System Restore are included in all versions of Windows Vista, the Previous versions user interface is only available in Vista Business, Enterprise and Ultimate. On other Vista versions, the previous versions of your files are still there; you just cannot access them easily. The Previous versions UI is available in all versions of Windows 7. It is not available in any version of Windows 8.


Is Volume Shadow Copy a versioning system for my documents?

No. A versioning system lets you access all versions of a document; every time you save a document, a new version is created. Volume Shadow Copy only allows you to go back to the moment when a restore point was made, which could be several days ago. So if you do screw up your dissertation, you might have to roll back to a very old version.


Can I rely on shadow copies as a backup?

No, for the following reasons:

  • Shadow copies are not true snapshots. When you create a restore point, you’re not making a new copy of the drive in question — you’re just telling Windows: start tracking the changes to this drive; if something changes, back up the original version so I can go back to it. Unchanged data will not be backed up. If the data on your drive gets changed (corrupted) for some low-level reason like a hardware error, VSC will not know that these changes happened and will not back up your data. (see below for a more detailed description of how VSC works)
  • The shadow copies are stored on the same volume as the original data, so when that volume dies, you lose everything.
  • With the default settings, there is no guarantee that shadow copies will be created regularly. In particular, Windows 7 will only create an automatic restore point if the most recent restore point is more than 7 days old. On Windows Vista, the minimum interval is 24 hours, but remember that the System Restore task will only run if your computer is on AC power and idle for at least 10 minutes, so it could take days before the conditions are right, especially if you run a lot of background processes or do not use your computer frequently.
  • There is no guarantee that a suitable shadow copy will be there when you need it. Windows deletes old shadow copies without a warning as soon as it runs out of shadow storage. With a lot of disk activity, it may even run out of space for a single shadow copy. In that case, you will wind up with no shadow copies at all; and again, there will be no message to warn you about it.


How much disk space do shadow copies take up?

By default, the maximum amount of storage available for shadow copies is 5% (on Windows 7) or 15% (on Vista), though only some of this space may be actually allocated at a given moment.

You can change the maximum amount of space available for shadow copies in Control Panel | System | System protection | Configure.


Is Volume Shadow Copy space-efficient?

It’s quite efficient. The 5% of disk space that it gets by default is usually enough to store several snapshots of the disk in question. How is this possible?

The first thing to understand is that volume shadow copies are not true snapshots. When a restore point is created, Volume Shadow Copy does not create a full image of the volume. If it did, it would be impossible to store several shadow copies of a volume using only 5% of that volume’s capacity.

Here’s what really happens when a restore point is created: VSC starts tracking the changes made to all the blocks on the volume. Whenever anyone writes data to a block, VSC makes a copy of that block and saves it on a hidden volume. So blocks are “backed up” only when they are about to get overwritten. The benefit of this approach is that no backup space is wasted on blocks that haven’t changed at all since the last restore point was created.

Notice that VSC operates on the block level, that is below the file system level. It sees the disk as a long series of blocks. (Still, it has some awareness of files, as you can tell it to exclude certain files and folders.)

The second important fact is that shadow copies are incremental. Suppose it’s Wednesday and your system has two shadow copies, created on Monday and Tuesday. Now, when you overwrite a block, a backup copy of the block is saved in the Tuesday shadow copy, but not in the Monday shadow copy. The Monday copy only contains the differences between Monday and Tuesday. More recent changes are only tracked in the Tuesday copy.

In other words, if we were to roll back an entire volume to Monday, we would take the volume as it is now, “undo” the changes made since Tuesday (using the blocks saved in the Tuesday shadow copy), and finally “undo” the changes made between Monday and Tuesday. So the oldest shadow copy is dependent on all the more recent shadow copies.


When I delete a large file, does VSC have to back up its contents?

No. When you delete a file, all that Windows does is remove the corresponding entry (file name, path, properties) from the Master File Table. The blocks (units of disk space) that contained the file’s contents are marked as unused, but they are not actually deleted. So all the data that was in the file is still there in the same blocks, until the blocks get overwritten (e.g. when you copy another file to the same volume).

Therefore, if you delete a 700 MB movie file, Volume Shadow Copy does not have to back up 700 MB of data. Because it operates on the block level, it does not have to back up anything, as the blocks occupied by the file are unchanged! The only thing it has to back up is the blocks occupied by the Master File Table, which has changed.

If you then start copying other files to the same disk, some of the blocks formerly occupied by the 700 MB file will get overwritten. VSC will make backups of these blocks as they get overwritten.


What actually happens when I create a new restore point?

Not much — VSS simply starts backing up the data to a new place, while leaving the “old place” there (at least until it runs out of space). Now you have two places to which you can restore your system, each representing a different point in time. When you create a restore point, you’re simply telling VSS: “I want to be able to go back to this point in time”.

Note that it’s a mistake to think that VSS is backing up every change you make! It only backs up enough to enable you to go to a specific point in time. Here’s an example scenario to clear things up:

  1. You create a file (version #1)
  2. You create a restore point
  3. You change the file (resulting in version #2) — VSS backs up version #1
  4. A week later, you change the file again (resulting in version #3) — VSS doesn’t back anything up, because it already has version #1 backed up. As a result, you can no longer go back to version #2. You can only go back to version #1 — the one that existed when the restore point was created.

(Note that actually VSS doesn’t operate on files but on blocks, but the principle is the same.)
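The scenario above can be modeled with a tiny shell toy (purely illustrative: real VSS works on disk blocks via a kernel driver, not on files in directories, and the names below are invented for the sketch):

```shell
#!/bin/sh
# Toy copy-on-write model: before a "block" is overwritten, its old
# contents are saved into the most recent snapshot directory, but only
# the first time it changes after that snapshot.
VOL=/tmp/vsc-demo/volume; SNAP=/tmp/vsc-demo/snapshots
rm -rf /tmp/vsc-demo; mkdir -p "$VOL" "$SNAP"

restore_point() { mkdir -p "$SNAP/$1"; }

write_block() {  # usage: write_block <name> <data>
    latest=$(ls "$SNAP" 2>/dev/null | tail -n 1)
    if [ -n "$latest" ] && [ -f "$VOL/$1" ] && [ ! -f "$SNAP/$latest/$1" ]; then
        cp "$VOL/$1" "$SNAP/$latest/$1"   # copy-on-write backup
    fi
    printf '%s' "$2" > "$VOL/$1"
}

write_block doc "version1"   # the block exists before any restore point
restore_point monday
write_block doc "version2"   # version1 is backed up into monday/
write_block doc "version3"   # nothing backed up: version2 is gone for good
cat "$SNAP/monday/doc"       # -> version1
```

Rolling back to "monday" would mean taking the volume as it is now and undoing changes using the blocks saved under monday/, which is exactly the dependency between snapshots described earlier.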


Can shadow copies expose files I thought I had securely deleted?

Suppose you decide to protect one of your documents from prying eyes. First, you create an encrypted copy using an encryption application. Then, you “wipe” (or “secure-delete”) the original document, which consists of overwriting it several times and deleting it. (This is necessary, because if you just deleted the document without overwriting it, all the data that was in the file would physically remain on the disk until it got overwritten by other data. See question above for an explanation of how file deletion works.)

Ordinarily, this would render the original, unencrypted document irretrievable. However, if the original file was stored on a volume protected by the Volume Shadow Copy service and it was there when a restore point was created, the original file will be retrievable using Previous versions. All you need to do is right-click the containing folder, click Restore previous versions, open a snapshot, and, lo and behold, you’ll see the original file that you tried so hard to delete!

The reason wiping the file doesn’t help, of course, is that before the file’s blocks get overwritten, VSC will save them to the shadow copy. It doesn’t matter how many times you overwrite the file, the shadow copy will still be there, safely stored on a hidden volume.


Can I delete a file from all the shadow copies?

No. Shadow copies are read-only, so there is no way to delete a file from all the shadow copies.

A partial solution is to delete all the shadow copies (by choosing Control Panel | System | System protection | Configure | Delete) before you wipe the file. This prevents VSC from making a copy of the file right before you overwrite it. However, it is quite possible that one of the shadow copies you just deleted already contained a copy of the file (for example, because it had recently been modified). Since deleting the shadow copies does not wipe the disk space that was occupied by them, the contents of the shadowed file will still be there on the disk.

So, if you really wanted to be secure, you would also have to wipe the blocks that used to contain the shadow copies. This would be very hard to do, as there is no direct access to that area of the disk.

Some other solutions to consider:

  • You could make sure you never save any sensitive data on a volume that’s protected by VSC. Of course, you would need a separate VSC-free volume for such data.
  • You could disable VSC altogether. (After disabling VSC, you may want to wipe the free space on your drive to overwrite the blocks previously occupied by VSC, which could contain shadow copies of your sensitive data.) However, if you disable VSC, you also lose System Restore functionality. Curiously, Windows offers no option to enable VSC only for system files. If you want to protect your system, you also have to enable Previous versions (see screenshot to the right).
  • The most secure approach is to use an encrypted system volume. That way, no matter what temporary files, shadow copies, etc. Windows creates, it will all be encrypted.

Notice that VSC only lets you recover files that existed when a restore point was created. So if the sequence of events is as follows:

create file → create restore point → make encrypted copy → overwrite original file

the original file will be recoverable. But if the sequence is:

create restore point → create file → make encrypted copy → overwrite original file

you are safe. If you make sure to encrypt and wipe files as soon as you create them, so that no restore point gets created after they are saved on disk in unencrypted form, there will be no way to recover them with VSC. However, it is not easy to control when Windows creates a restore point; for example, it can do it at any time, just because your computer happens to be idle.


Can I exclude certain files from shadow copies?

Yes, but you have to edit the registry to do that. Here are detailed instructions from MSDN.


What happens when VSC runs out of shadow storage?

Most of the time, most of the data on your disk stays unchanged. However, suppose you uninstall a 5 GB game and then install another 5 GB game in its place. This means that 5 GB worth of blocks got overwritten and had to be backed up by VSC.

In such “high-churn” scenarios, VSC can run out of space pretty quickly. What happens then? VSC deletes as many previous shadow copies as necessary, starting from the oldest, until it has enough space for the latest copy. In the rare event that there isn’t enough space even for the one most recent copy, all the shadow copies will be deleted. There are no partial copies.

Thanks to Adi Oltean, who was one of the engineers of Volume Shadow Copy at Microsoft, for answering my questions on the subject.
