too strict on min disk free alarm? (5%/300GB drive)



Guest Name withheld by request
Posted

This is a filesystem question, but I could not find an appropriate
newsgroup - please suggest one, thanks.

 

We have an application owner who claims our corporate standard of
telephoning them at all hours, day or night, when the disk is more
than 95% full is outdated for any drive over 100GB or so.

 

He argues that the filesystem (NTFS) on his server's 300GB drive can

still operate efficiently when it is 97 or 98% full. Is this

correct?

 

Can someone give a good argument for keeping at least 5% or more free

disk space, even when 5% may be many GB, or is this out of date thinking?

 

This is on a SAN, so we never defrag it; the EMC SAN has its
own approach to that issue.

 

--

thanks much

Tom

Guest SBS Rocker
Posted

Re: too strict on min disk free alarm? (5%/300GB drive)

 

Well, I have always used the old NT4-era standard: to get maximum
performance from a drive, keep at least 25% of it free. These days,
with larger disks and arrays, 25% of a 300GB array could mean 75GB,
which is quite a bit to keep unused. So IMHO 5% may be a bit too low,
but 10-15% is definitely a good benchmark.
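For concreteness, those rules of thumb work out as follows on the drive in question - a quick sketch in Python (the percentages are simply the figures mentioned in this thread, not any vendor's official recommendation):

```python
# Sketch: what the free-space rules of thumb from this thread mean in
# absolute gigabytes on a 300GB drive.

def min_free_gb(capacity_gb, free_percent):
    """Gigabytes that must stay unused to satisfy a free-space rule."""
    return capacity_gb * free_percent / 100.0

CAPACITY_GB = 300  # the drive size under discussion

for pct in (25, 15, 10, 5):
    reserved = min_free_gb(CAPACITY_GB, pct)
    print(f"{pct:>2}% free on {CAPACITY_GB}GB = {reserved:.0f}GB reserved")
```

So the old 25% rule reserves 75GB, the 10-15% benchmark 30-45GB, and the 5% alarm threshold only 15GB.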

 

 


Guest Name withheld by request
Posted

Re: too strict on min disk free alarm? (5%/300GB drive)

 

In article <ufvh7YwJIHA.4688@TK2MSFTNGP06.phx.gbl>,

SBS Rocker <noreply@NoDomain.com> wrote:

>Well I personally have always used the standard that back in the old NT4

>days was that to achieve maximum performance on a drive that you should keep

>at least 25% free. These days with larger disks and arrays 25% from a 300GB

>array could mean 75GB which is quite a bit to try to keep unused. So IMHO 5%

>may be a bit too low but 10-15% is definitely a good benchmark.

 

Thanks for your response.

 

Take a look at Daniel James response in another thread:

 

uk.comp.homebuilt #97976
Date: Wed Nov 07 03:03:26 MST 2007
Subject: Re: Odd Harddrive problem
From: Daniel James <wastebasket@nospam.aaisp.org>
Reply-To: wastebasket@nospam.aaisp.org
Lines: 20

 

In article news:<fgqakm$r60$1@news.datemas.de>, Dave J. wrote:

> I've got a strange problem with a 200GB harddisk. It passes checkdisk,
> though I haven't done a sector by sector for fear of it writing off my
> boot drive while I can well do without a holdup. It seems to function
> correctly, no corrupted data that I've noticed so far, but it's
> frequently really *really* slow.

 

What filesystem, and how full is it?

 

One reason I ask is that when an NTFS disk is getting full it changes its

allocation strategy so as to make more efficient use of disk space at the

expense of speed. If your disk is NTFS and is (or has been) more than 90%

full this could be what's happening.

Guest Name withheld by request
Posted

Re: too strict on min disk free alarm? (5%/300GB drive)

 

In article <1195073728.805524@irys.nyx.net>,

Name withheld by request <anonb6e9@nyx3.nyx.net> wrote:

>This is a filesystem question, but I could not find an appropriate

>newsgroup - pls suggest one, thx.

>

>We have an appliaction owner that claims our corporate standard of

>telephoning them at all hours day or night, when the

>disk is more that 95% full is out-dated for any drive that is over

>100GB or so.

>

>He argues that the filesystem (NTFS) on his server's 300GB drive can

>still operate efficiently when it is 97 or 98% full. Is this

>correct?

 

One reason to keep some free space is that recursively changing
the permissions on a directory requires some free space on the
drive holding that directory - apparently the permissions change
needs to write something (temp files?) to the disk beyond just the
updated security descriptors.

 

I learned this the hard way, on a 600GB drive. Ten minutes or so after
issuing a command to recursively change the permissions of a directory
holding several hundred GB of files, the command errored out and
explorer.exe died on the console of our production server. I was able
to Ctrl-Alt-Del to bring up Task Manager and reboot gracefully. It
turns out the disk had filled at some point during the command.
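A simple guard against that failure mode is to check free space before kicking off a bulk operation. A minimal sketch in Python (the 10% floor here is an arbitrary illustration, not a documented NTFS requirement):

```python
import shutil

def has_free_space(path, min_free_fraction=0.10):
    """Return True if the volume holding `path` has at least
    `min_free_fraction` of its capacity unused."""
    usage = shutil.disk_usage(path)  # named tuple: total, used, free (bytes)
    return usage.free / usage.total >= min_free_fraction

# Before a recursive permissions change (or any bulk write), bail out early:
# if not has_free_space(r"D:\data"):
#     raise RuntimeError("volume too full; free space before changing ACLs")
```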

Guest SBS Rocker
Posted

Re: too strict on min disk free alarm? (5%/300GB drive)

 

Well, it does appear everyone has their own preference as to what the
"free space" standard should be and how free space affects a system's
ability to perform at its best. What is clear is that you do need a
good ratio available to get your server's best performance. To each
their own, since the only true way to measure it is to compare
performance at different free-space percentages. Look at it this way:
you can never have too much, and a good starting point IMHO is 15-20%.
I just had a server reach 5% free (less than 10GB), so I added another
73GB drive to my array - and that isn't even my OS drive. Another point
to keep in mind is preventing a crash from running out of disk space
entirely: by your numbers, a 300GB drive at 98% full has only about
6GB free, and if someone unknowingly tried to upload or download, say,
10GB of data, there is a real possibility of filling the drive and
taking the system down.
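The arithmetic behind that scenario, sketched out (300GB taken as the nominal capacity; the 10GB transfer is the hypothetical from the paragraph above):

```python
# Sketch: how much headroom a 98%-full 300GB drive actually has.
capacity_gb = 300
used_fraction = 0.98
free_gb = capacity_gb * (1 - used_fraction)   # ~6GB free at 98% full

incoming_gb = 10                              # one large upload/download
shortfall_gb = incoming_gb - free_gb          # the write comes up ~4GB short
print(f"free: {free_gb:.0f}GB, incoming: {incoming_gb}GB, "
      f"shortfall: {shortfall_gb:.0f}GB")
```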

 

 


