
Guest Leythos
Posted

Started playing with Vista again and had to add 5 different subnet ranges in the firewall in order to get Vista updates. So, considering Win XP, Office XP, 2003, 2007, Vista, Servers, etc., I have about 15 sets of subnets (ranges) from which I need to allow CAB/EXE and other content.

 

MS, please pick one /24 range and use it for all of your update sites.

 

--

 

Leythos

- Igitur qui desiderat pacem, praeparet bellum.

- Calling an illegal alien an "undocumented worker" is like calling a

drug dealer an "unlicensed pharmacist"

spam999free@rrohio.com (remove 999 for proper email address)

Guest DevilsPGD
Posted

Re: It would be nice if MS could settle on a single subnet for updates

 

In message <MPG.2111cf00343ea0e1989831@adfree.Usenet.com> Leythos

<void@nowhere.lan> wrote:

>Started playing with Vista again and had to add 5 different subnet

>ranges in the firewall in order to get Vista updates, so, considering

>Win XP, Office XP, 2003, 2007, Vista, Servers, etc.. I have about 15

>sets of subnets (ranges) needed to allow CAB/EXE and other content from.

>

>MS, Please pick on /24 range and use it for all of your update sites.

 

Perhaps you should use a larger CIDR range than a /24?
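For what it's worth, merging the observed ranges into a covering prefix is easy to script. A rough sketch with Python's `ipaddress` module (the subnets below are made-up documentation ranges, not Microsoft's actual ones):

```python
import ipaddress

# Hypothetical update-server subnets pulled from firewall logs
observed = [
    ipaddress.ip_network("192.0.2.0/26"),
    ipaddress.ip_network("192.0.2.64/26"),
    ipaddress.ip_network("192.0.2.128/25"),
]

# Merge adjacent and overlapping blocks into the fewest covering prefixes
merged = list(ipaddress.collapse_addresses(observed))
print(merged)  # [IPv4Network('192.0.2.0/24')]

# Or deliberately widen a /24 into a broader aggregate to ride out IP churn
wider = ipaddress.ip_network("192.0.2.0/24").supernet(new_prefix=22)
print(wider)  # 192.0.0.0/22
```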

 

--

If quitters never win, and winners never quit,

what fool came up with, "Quit while you're ahead"?

Guest Leythos
Posted

Re: It would be nice if MS could settle on a single subnet for updates

 

In article <ld2ga3dbln975na5c46gogpoc0sd9vgfot@4ax.com>,

spam_narf_spam@crazyhat.net says...

> In message <MPG.2111cf00343ea0e1989831@adfree.Usenet.com> Leythos

> <void@nowhere.lan> wrote:

>

> >Started playing with Vista again and had to add 5 different subnet

> >ranges in the firewall in order to get Vista updates, so, considering

> >Win XP, Office XP, 2003, 2007, Vista, Servers, etc.. I have about 15

> >sets of subnets (ranges) needed to allow CAB/EXE and other content from.

> >

> >MS, Please pick on /24 range and use it for all of your update sites.

>

> Perhaps you should use a larger CIDR range then a /24?

 

I could, but there is no clear sign from MS as to what IPs they are using. In many cases the same company that provides their downloads also provides other companies' downloads in the same block.

 

So, maybe MS should pick one subnet, since they can't possibly need more

than a /24 to provide updates, and publish it for us network admins?

 

--

 

Leythos

- Igitur qui desiderat pacem, praeparet bellum.

- Calling an illegal alien an "undocumented worker" is like calling a

drug dealer an "unlicensed pharmacist"

spam999free@rrohio.com (remove 999 for proper email address)

Guest Mike Brannigan
Posted

Re: It would be nice if MS could settle on a single subnet for updates

 

"Leythos" <void@nowhere.lan> wrote in message

news:MPG.21126ba36448e41798983d@adfree.Usenet.com...

> In article <ld2ga3dbln975na5c46gogpoc0sd9vgfot@4ax.com>,

> spam_narf_spam@crazyhat.net says...

>> In message <MPG.2111cf00343ea0e1989831@adfree.Usenet.com> Leythos

>> <void@nowhere.lan> wrote:

>>

>> >Started playing with Vista again and had to add 5 different subnet

>> >ranges in the firewall in order to get Vista updates, so, considering

>> >Win XP, Office XP, 2003, 2007, Vista, Servers, etc.. I have about 15

>> >sets of subnets (ranges) needed to allow CAB/EXE and other content from.

>> >

>> >MS, Please pick on /24 range and use it for all of your update sites.

>>

>> Perhaps you should use a larger CIDR range then a /24?

>

> I could, but there is no clear sign from MS as to what IP's they are

> using. In many cases the same company that provides their downloads also

> provides other companies downloads in the same block.

>

> So, maybe MS should pick one subnet, since they can't possibly need more

> than a /24 to provide updates, and publish it for us network admins?

>

> --

 

You should not be using specific addresses to access any Microsoft service, whether that's activation, downloads, etc.

Microsoft operates a number of layers of protection against various forms of Internet-based attack, which include rapidly changing the IP addresses of key services. If you try to use specific addresses, there is no guarantee they will remain valid for any period of time.

Maybe you need to reconsider your firewall and blocking strategy and use either better tools or an alternative approach for controlling access from your network to external services. (Blocking by IP range is not a viable long-term solution.)

 

--

 

Mike Brannigan

"Leythos" <void@nowhere.lan> wrote in message

news:MPG.21126ba36448e41798983d@adfree.Usenet.com...

> In article <ld2ga3dbln975na5c46gogpoc0sd9vgfot@4ax.com>,

> spam_narf_spam@crazyhat.net says...

>> In message <MPG.2111cf00343ea0e1989831@adfree.Usenet.com> Leythos

>> <void@nowhere.lan> wrote:

>>

>> >Started playing with Vista again and had to add 5 different subnet

>> >ranges in the firewall in order to get Vista updates, so, considering

>> >Win XP, Office XP, 2003, 2007, Vista, Servers, etc.. I have about 15

>> >sets of subnets (ranges) needed to allow CAB/EXE and other content from.

>> >

>> >MS, Please pick on /24 range and use it for all of your update sites.

>>

>> Perhaps you should use a larger CIDR range then a /24?

>

> I could, but there is no clear sign from MS as to what IP's they are

> using. In many cases the same company that provides their downloads also

> provides other companies downloads in the same block.

>

> So, maybe MS should pick one subnet, since they can't possibly need more

> than a /24 to provide updates, and publish it for us network admins?

>

> --

>

> Leythos

> - Igitur qui desiderat pacem, praeparet bellum.

> - Calling an illegal alien an "undocumented worker" is like calling a

> drug dealer an "unlicensed pharmacist"

> spam999free@rrohio.com (remove 999 for proper email address)

Guest DevilsPGD
Posted

Re: It would be nice if MS could settle on a single subnet for updates

 

In message <MPG.21126ba36448e41798983d@adfree.Usenet.com> Leythos

<void@nowhere.lan> wrote:

>In article <ld2ga3dbln975na5c46gogpoc0sd9vgfot@4ax.com>,

>spam_narf_spam@crazyhat.net says...

>> In message <MPG.2111cf00343ea0e1989831@adfree.Usenet.com> Leythos

>> <void@nowhere.lan> wrote:

>>

>> >Started playing with Vista again and had to add 5 different subnet

>> >ranges in the firewall in order to get Vista updates, so, considering

>> >Win XP, Office XP, 2003, 2007, Vista, Servers, etc.. I have about 15

>> >sets of subnets (ranges) needed to allow CAB/EXE and other content from.

>> >

>> >MS, Please pick on /24 range and use it for all of your update sites.

>>

>> Perhaps you should use a larger CIDR range then a /24?

>

>I could, but there is no clear sign from MS as to what IP's they are

>using. In many cases the same company that provides their downloads also

>provides other companies downloads in the same block.

 

Ahh, true enough.

>So, maybe MS should pick one subnet, since they can't possibly need more

>than a /24 to provide updates, and publish it for us network admins?

 

Perhaps a WSUS server would be better suited to your needs?

 

--

If quitters never win, and winners never quit,

what fool came up with, "Quit while you're ahead"?

Guest Steve Riley [MSFT]
Posted

Re: It would be nice if MS could settle on a single subnet for updates

 

IP addresses are spoofable, so they are not appropriate for making security

decisions. Only when you're using IPsec can you do this, because then the

cryptographic signatures appended to the datagrams provide a mechanism for

you to trust originating addresses.

 

We purposefully change the IP addresses regularly to prevent various kinds

of attacks.

 

Steve Riley

steve.riley@microsoft.com

http://blogs.technet.com/steriley

 

 

"Leythos" <void@nowhere.lan> wrote in message

news:MPG.2111cf00343ea0e1989831@adfree.Usenet.com...

> Started playing with Vista again and had to add 5 different subnet

> ranges in the firewall in order to get Vista updates, so, considering

> Win XP, Office XP, 2003, 2007, Vista, Servers, etc.. I have about 15

> sets of subnets (ranges) needed to allow CAB/EXE and other content from.

>

> MS, Please pick on /24 range and use it for all of your update sites.

>

> --

>

> Leythos

> - Igitur qui desiderat pacem, praeparet bellum.

> - Calling an illegal alien an "undocumented worker" is like calling a

> drug dealer an "unlicensed pharmacist"

> spam999free@rrohio.com (remove 999 for proper email address)

Guest Leythos
Posted

Re: It would be nice if MS could settle on a single subnet for updates

 

In article <1CC1ABE2-E961-4560-B908-38E896689A22@microsoft.com>,

steve.riley@microsoft.com says...

> IP addresses are spoofable, so they are not appropriate for making security

> decisions. Only when you're using IPsec can you do this, because then the

> cryptographic signatures appended to the datagrams provide a mechanism for

> you to trust originating addresses.

>

> We purposefully change the IP addresses regularly to prevent various kinds

> of attacks.

 

And as a normal measure of security we don't allow unrestricted access to the net; we don't allow CAB, EXE, and a bunch of other file types via HTTP or SMTP. We only allow web access to partner sites and a few whitelisted sites, which, along with many other measures, keeps the network secure.

 

I tend to enter subnets for the MS update sites, a /24 or a /28

depending on what I think the range will be, but never just a single IP

as I know the IP will change in that range.
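The /24-versus-/28 guess boils down to a containment check. A minimal sketch in Python (the approved ranges below are placeholder documentation prefixes, not real update-site addresses):

```python
import ipaddress

# Placeholder whitelist: one /24 and one /28 (documentation prefixes)
APPROVED = [
    ipaddress.ip_network("198.51.100.0/24"),
    ipaddress.ip_network("203.0.113.16/28"),  # covers hosts .16 through .31
]

def is_approved(ip: str) -> bool:
    """Return True if the address falls inside any approved range."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in APPROVED)

print(is_approved("198.51.100.77"))  # True: inside the /24
print(is_approved("203.0.113.40"))   # False: .40 is outside .16-.31
```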

 

What would be nice, since we have never had a customer hacked, is a list of the IP ranges used by the different update providers. I don't have a problem with MS changing them, but it sure would be nice to know what they are so that we can get them into the system.

 

As for WSUS: we still need to know what the update sites are; we don't even allow the servers to get updates unless it's from an approved subnet/network.

 

Since this is a "security" group, I would think that others would commonly block all users from code downloads as a standard practice and only allow code downloads from approved sites...

 

--

 

Leythos

- Igitur qui desiderat pacem, praeparet bellum.

- Calling an illegal alien an "undocumented worker" is like calling a

drug dealer an "unlicensed pharmacist"

spam999free@rrohio.com (remove 999 for proper email address)

Guest Mike Brannigan
Posted

Re: It would be nice if MS could settle on a single subnet for updates

 

"Leythos" <void@nowhere.lan> wrote in message

news:MPG.2112eef792c3b8c4989844@adfree.Usenet.com...

> In article <1CC1ABE2-E961-4560-B908-38E896689A22@microsoft.com>,

> steve.riley@microsoft.com says...

>> IP addresses are spoofable, so they are not appropriate for making

>> security

>> decisions. Only when you're using IPsec can you do this, because then the

>> cryptographic signatures appended to the datagrams provide a mechanism

>> for

>> you to trust originating addresses.

>>

>> We purposefully change the IP addresses regularly to prevent various

>> kinds

>> of attacks.

>

> And as a normal measure of security we don't allow unrestricted access

> to the net, we don't allow CAB, EXE, and a bunch of other files via HTTP

> or SMTP. We only allow web access to partner sites and a few white-

> listed sites, this keeps the network secure, along with many other

> measures.

>

> I tend to enter subnets for the MS update sites, a /24 or a /28

> depending on what I think the range will be, but never just a single IP

> as I know the IP will change in that range.

>

> What would be nice, since we have never had a hacked customer, is if we

> could have a list of IP ranges used by the different update providers. I

> don't have a problem with MS changing them, but it sure would be nice to

> know what they are so that we can get them in the system.

>

> As for WSUS - we still need to know what the update sites are, we don't

> even allow the servers to get updates unless it's an approved

> subnet/network.

>

> Since this is a "security" group, I would think that others would

> commonly block all users from code downloads as a standard practice and

> only allow code downloads from approved site....

>

> --

>

> Leythos

 

 

Leythos,

 

As I responded to Steve a few hours earlier, it is not even a case of a range being made public. Microsoft reserves the right to alter the IP addresses of all public-facing services as and when it sees fit; publishing specific ranges would pose a threat to the stability of the service, since it would simply give potential attackers a known set of ranges to target for DoS or other forms of attack. I realize it would be possible to work out the full set of ranges that Microsoft's various service providers use and target those, but there are many of them, which makes the attack surface significantly larger and an attack easier to detect.

So, in short, Microsoft is unlikely to make available anything other than the public-facing DNS names for its services. Maybe you should look at alternative approaches.

Consider directing your clients to an internal DNS server configured with conditional forwarding, so that it forwards for resolution only names that meet certain criteria, such as *.microsoft.com and your other whitelisted sites. Only those sites could then be resolved by the external DNS servers you choose, and thus accessed. I realize this does not prevent direct access if someone knows an IP address to type into a URL, but it is a start while you look at alternative strategies.

If you use a proxy server at the edge of your network, you will be able to log all access to URLs with an IP address in them and then take appropriate action against that member of staff.
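One concrete way to implement that conditional-forwarding idea is a whitelist-only resolver configuration. This sketch uses dnsmasq (my choice for illustration; any resolver with per-domain forwarding would do, and the upstream address and second domain are placeholders):

```
# dnsmasq.conf sketch: resolve only whitelisted domains
no-resolv                            # ignore upstream servers from /etc/resolv.conf
server=/microsoft.com/203.0.113.53   # forward *.microsoft.com to a chosen upstream
server=/windowsupdate.com/203.0.113.53
# any name not matching a server=/domain/ line has no upstream and will not resolve
```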

--

 

Mike Brannigan

 

"Leythos" <void@nowhere.lan> wrote in message

news:MPG.2112eef792c3b8c4989844@adfree.Usenet.com...

> In article <1CC1ABE2-E961-4560-B908-38E896689A22@microsoft.com>,

> steve.riley@microsoft.com says...

>> IP addresses are spoofable, so they are not appropriate for making

>> security

>> decisions. Only when you're using IPsec can you do this, because then the

>> cryptographic signatures appended to the datagrams provide a mechanism

>> for

>> you to trust originating addresses.

>>

>> We purposefully change the IP addresses regularly to prevent various

>> kinds

>> of attacks.

>

> And as a normal measure of security we don't allow unrestricted access

> to the net, we don't allow CAB, EXE, and a bunch of other files via HTTP

> or SMTP. We only allow web access to partner sites and a few white-

> listed sites, this keeps the network secure, along with many other

> measures.

>

> I tend to enter subnets for the MS update sites, a /24 or a /28

> depending on what I think the range will be, but never just a single IP

> as I know the IP will change in that range.

>

> What would be nice, since we have never had a hacked customer, is if we

> could have a list of IP ranges used by the different update providers. I

> don't have a problem with MS changing them, but it sure would be nice to

> know what they are so that we can get them in the system.

>

> As for WSUS - we still need to know what the update sites are, we don't

> even allow the servers to get updates unless it's an approved

> subnet/network.

>

> Since this is a "security" group, I would think that others would

> commonly block all users from code downloads as a standard practice and

> only allow code downloads from approved site....

>

> --

>

> Leythos

> - Igitur qui desiderat pacem, praeparet bellum.

> - Calling an illegal alien an "undocumented worker" is like calling a

> drug dealer an "unlicensed pharmacist"

> spam999free@rrohio.com (remove 999 for proper email address)

Guest Leythos
Posted

Re: It would be nice if MS could settle on a single subnet for updates

 

In article <C2303C6C-D54B-4B66-AB9F-08B4A4202F31@microsoft.com>,

Mike.Brannigan@localhost says...

> So in short Microsoft is unlikely to make available anything other then the

> public facing DNS name for their services.

> Maybe you should look at alternative approaches to this.

> Consider if you direct your clients to use an internal DNS server that is

> configured to only forward for name resolution (conditional forwarding) only

> names that meet certain criteria such as *.microsoft.com and your other

> white listed sites. This would allow only those sites to be then resolved

> by the DNS servers that you choose to use externally and thus accesses.

> I realize this does not prevent a direct access if someone knows an IP

> address to type into a URL but it is a start while you look at alternative

> strategies.

> If you use a proxy server at the edge of your network you will be able to

> log all access to URLs with in IP address in it and then take appropriate

> action against that member of staff etc..

 

Mike, Steve,

 

And there lies the problem for security. We already see the rejected

connections and their names and even the full file path/name, and yes,

it's easy to add them into the approved list.

 

This should be a problem for all admins, I would think, where they completely block their users from downloading code but want to allow MS Updates to the servers and workstations. With the firewalls we have used, most of those on the market, there is no simple means to whitelist your update sites, as they keep changing. Yes, we could install a proxy server, but that really seems like a waste when the only place we have a problem with is MS.

 

I understand your reasons, but it's a catch-22: you move your stuff around to limit your exposure, which forces customers to either purchase more hardware or allow code to be downloaded from unknown sites.

 

I'll stick with watching for the Windows Update failures in the logs and

manually adding the networks as needed - at least this way our networks

remain secure.

 

--

 

Leythos

- Igitur qui desiderat pacem, praeparet bellum.

- Calling an illegal alien an "undocumented worker" is like calling a

drug dealer an "unlicensed pharmacist"

spam999free@rrohio.com (remove 999 for proper email address)

Guest Kerry Brown
Posted

Re: It would be nice if MS could settle on a single subnet for updates

 

"Leythos" <void@nowhere.lan> wrote in message

news:MPG.2113c25bdf4c48db98984e@adfree.Usenet.com...

> In article <C2303C6C-D54B-4B66-AB9F-08B4A4202F31@microsoft.com>,

> Mike.Brannigan@localhost says...

>> So in short Microsoft is unlikely to make available anything other then

>> the

>> public facing DNS name for their services.

>> Maybe you should look at alternative approaches to this.

>> Consider if you direct your clients to use an internal DNS server that is

>> configured to only forward for name resolution (conditional forwarding)

>> only

>> names that meet certain criteria such as *.microsoft.com and your other

>> white listed sites. This would allow only those sites to be then

>> resolved

>> by the DNS servers that you choose to use externally and thus accesses.

>> I realize this does not prevent a direct access if someone knows an IP

>> address to type into a URL but it is a start while you look at

>> alternative

>> strategies.

>> If you use a proxy server at the edge of your network you will be able to

>> log all access to URLs with in IP address in it and then take appropriate

>> action against that member of staff etc..

>

> Mike, Steve,

>

> And there lies the problem for security. We already see the rejected

> connections and their names and even the full file path/name, and yes,

> it's easy to add them into the approved list.

>

> This should be a problem for all users I would think. Where they block

> the downloading of code by their users, completely, but want to allow MS

> Updates to the servers and workstations. In the case of the firewalls we

> have used, most of them on the market, there is no simple means to white

> list your update sites as they keep changing. Yes, we could install a

> proxy server, but that really seems like a waste when the only place we

> have a problem with is MS.

>

> I understand your reasons, but it's a catch 22, move your stuff around

> to limit your exposure or force customers to either purchase more

> hardware or to allow code to be downloaded from unknown sites.

>

> I'll stick with watching for the Windows Update failures in the logs and

> manually adding the networks as needed - at least this way our networks

> remain secure.

>

 

 

Use WSUS and only allow the WSUS server to download updates.

 

--

Kerry Brown

Microsoft MVP - Shell/User

http://www.vistahelp.ca

Guest Mike Brannigan
Posted

Re: It would be nice if MS could settle on a single subnet for updates

 

"Leythos" <void@nowhere.lan> wrote in message

news:MPG.2113c25bdf4c48db98984e@adfree.Usenet.com...

> In article <C2303C6C-D54B-4B66-AB9F-08B4A4202F31@microsoft.com>,

> Mike.Brannigan@localhost says...

>> So in short Microsoft is unlikely to make available anything other then

>> the

>> public facing DNS name for their services.

>> Maybe you should look at alternative approaches to this.

>> Consider if you direct your clients to use an internal DNS server that is

>> configured to only forward for name resolution (conditional forwarding)

>> only

>> names that meet certain criteria such as *.microsoft.com and your other

>> white listed sites. This would allow only those sites to be then

>> resolved

>> by the DNS servers that you choose to use externally and thus accesses.

>> I realize this does not prevent a direct access if someone knows an IP

>> address to type into a URL but it is a start while you look at

>> alternative

>> strategies.

>> If you use a proxy server at the edge of your network you will be able to

>> log all access to URLs with in IP address in it and then take appropriate

>> action against that member of staff etc..

>

> Mike, Steve,

>

> And there lies the problem for security. We already see the rejected

> connections and their names and even the full file path/name, and yes,

> it's easy to add them into the approved list.

>

> This should be a problem for all users I would think. Where they block

> the downloading of code by their users, completely, but want to allow MS

> Updates to the servers and workstations. In the case of the firewalls we

> have used, most of them on the market, there is no simple means to white

> list your update sites as they keep changing. Yes, we could install a

> proxy server, but that really seems like a waste when the only place we

> have a problem with is MS.

>

> I understand your reasons, but it's a catch 22, move your stuff around

> to limit your exposure or force customers to either purchase more

> hardware or to allow code to be downloaded from unknown sites.

>

> I'll stick with watching for the Windows Update failures in the logs and

> manually adding the networks as needed - at least this way our networks

> remain secure.

>

> --

>

> Leythos

 

You have highlighted your own biggest problem here: "but want to allow MS Updates to the servers and workstations." ABSOLUTELY NOT. Never, ever, in a production corporate environment do you allow ANY of your workstations and servers to directly access anyone for patches or updates. I have never allowed this, or even seen it, at real large or enterprise customers. (The only place it may crop up is in mom-and-pop shops with 10 PCs and a server.)

If you want to patch your systems, you do so in a properly controlled manner using the appropriate centrally managed distribution tools, such as WSUS for small and medium environments, System Center Configuration Manager 2007 for larger ones, or similar products from other vendors. You download the patches, or allow your WSUS or similar product to download them from the vendor; you then regression-test them for your environment (hardware, software, etc.), approve them for deployment, and deploy them to the servers and workstations from inside your secure corporate network. Now it is not a problem to let that one server do its downloads from the vendors (just as you would for antivirus updates: you download them to an internal distribution server).

As you said your only problem is with Microsoft, the solution outlined above is the fix: only one server needs access through your draconian firewall policies, and you get a real, secure enterprise patch-management solution that significantly lowers the risk to your environment. With the best will in the world, if you are letting servers auto-update all patches from Microsoft without any degree of regression testing, you have far bigger problems than worrying about your firewall rules.

If you stick to watching for failures and manually updating rules, you are wasting your time, providing a poor service, and getting paid to do something there is no need to do.

--

 

Mike Brannigan

"Leythos" <void@nowhere.lan> wrote in message

news:MPG.2113c25bdf4c48db98984e@adfree.Usenet.com...

> In article <C2303C6C-D54B-4B66-AB9F-08B4A4202F31@microsoft.com>,

> Mike.Brannigan@localhost says...

>> So in short Microsoft is unlikely to make available anything other then

>> the

>> public facing DNS name for their services.

>> Maybe you should look at alternative approaches to this.

>> Consider if you direct your clients to use an internal DNS server that is

>> configured to only forward for name resolution (conditional forwarding)

>> only

>> names that meet certain criteria such as *.microsoft.com and your other

>> white listed sites. This would allow only those sites to be then

>> resolved

>> by the DNS servers that you choose to use externally and thus accesses.

>> I realize this does not prevent a direct access if someone knows an IP

>> address to type into a URL but it is a start while you look at

>> alternative

>> strategies.

>> If you use a proxy server at the edge of your network you will be able to

>> log all access to URLs with in IP address in it and then take appropriate

>> action against that member of staff etc..

>

> Mike, Steve,

>

> And there lies the problem for security. We already see the rejected

> connections and their names and even the full file path/name, and yes,

> it's easy to add them into the approved list.

>

> This should be a problem for all users I would think. Where they block

> the downloading of code by their users, completely, but want to allow MS

> Updates to the servers and workstations. In the case of the firewalls we

> have used, most of them on the market, there is no simple means to white

> list your update sites as they keep changing. Yes, we could install a

> proxy server, but that really seems like a waste when the only place we

> have a problem with is MS.

>

> I understand your reasons, but it's a catch 22, move your stuff around

> to limit your exposure or force customers to either purchase more

> hardware or to allow code to be downloaded from unknown sites.

>

> I'll stick with watching for the Windows Update failures in the logs and

> manually adding the networks as needed - at least this way our networks

> remain secure.

>

> --

>

> Leythos

> - Igitur qui desiderat pacem, praeparet bellum.

> - Calling an illegal alien an "undocumented worker" is like calling a

> drug dealer an "unlicensed pharmacist"

> spam999free@rrohio.com (remove 999 for proper email address)

Guest DevilsPGD
Posted

Re: It would be nice if MS could settle on a single subnet for updates

 

In message <MPG.2112eef792c3b8c4989844@adfree.Usenet.com> Leythos

<void@nowhere.lan> wrote:

>As for WSUS - we still need to know what the update sites are, we don't

>even allow the servers to get updates unless it's an approved

>subnet/network.

 

The suggestion would be to run WSUS outside your firewall, as though it

were your own personal Windows Update server on an IP you'd know and

trust for your internal clients to update.

 

(Obviously the WSUS server shouldn't be completely unprotected, but it

doesn't need to live within your LAN and have unrestricted internet

access at the same time)

 

--

If quitters never win, and winners never quit,

what fool came up with, "Quit while you're ahead"?

Guest Leythos
Posted

Re: It would be nice if MS could settle on a single subnet for updates

 

In article <3s1ka3demsq7roarrb5g1dglqlnob5fovn@4ax.com>,

spam_narf_spam@crazyhat.net says...

> In message <MPG.2112eef792c3b8c4989844@adfree.Usenet.com> Leythos

> <void@nowhere.lan> wrote:

>

> >As for WSUS - we still need to know what the update sites are, we don't

> >even allow the servers to get updates unless it's an approved

> >subnet/network.

>

> The suggestion would be to run WSUS outside your firewall, as though it

> were your own personal Windows Update server on an IP you'd know and

> trust for your internal clients to update.

>

> (Obviously the WSUS server shouldn't be completely unprotected, but it

> doesn't need to live within your LAN and have unrestricted internet

> access at the same time)

 

Yep, and we could do that, even inside the LAN and allow exceptions for

it. In the case of most of our clients, with a very few exceptions, even

locations with several hundred nodes in the lan, we've never had a

problem allowing the workstations to auto download/install the windows

updates, not since it was available. On certain machines we select to

download and then manually install, but for the masses of clients

machines we just allow them to auto-update and have never had any

problems with that method. Servers, manual only.

 

About half our clients have under 100 nodes on the LAN; they most often have one or two servers, and we could install WSUS on one of them or on the single server, but the servers are very stable and adding another component might not preserve that stability. So it's still a catch-22, but WSUS might just be the only real way around this.

 

Thanks

 

--

 

Leythos

- Igitur qui desiderat pacem, praeparet bellum.

- Calling an illegal alien an "undocumented worker" is like calling a

drug dealer an "unlicensed pharmacist"

spam999free@rrohio.com (remove 999 for proper email address)

Guest Leythos
Posted

Re: It would be nice if MS could settle on a single subnet for updates

 

In article <56789F44-352D-4AD6-A2CB-F590FB624EB6@microsoft.com>,

Mike.Brannigan@localhost says...

> You have highlighted your own biggest problem here - "but want to allow MS

> Updates to the servers and workstations." - ABSOLUTELY NOT.

> NO never ever ever in a production corporate environment do you allow ANY of

> your workstations and servers to directly access anyone for patches or

> updates

 

I should have been clearer on the servers, sorry: we download but manually install on all servers and on specific-function workstations.

 

In all this time we've never had a problem with automatic install on the

workstations (and we have specific machines that we manually install on)

in the production environment.

 

So allowing automatic updates on most workstations has never been a problem for us.

 

The real problem is that even if we set them to manual, they could not get the updates unless we enter exceptions in the firewall for the MS update sites. This is what I experienced with another install of Vista: 29 updates, and not a single one came from the same list of ranges that we get the XP/Office/Server updates from. So even manual install fails in that case.

 

Based on another post I guess I'm going to have to install WSUS and just

allow all exe/cab/code files to be pulled in HTTP sessions to that

server.

 

--

 

Leythos

- Igitur qui desiderat pacem, praeparet bellum.

- Calling an illegal alien an "undocumented worker" is like calling a

drug dealer an "unlicensed pharmacist"

spam999free@rrohio.com (remove 999 for proper email address)

Guest DevilsPGD
Posted

Re: It would be nice if MS could settingle on a single subnet for updates

 

In message <MPG.2113d9776ab5f488989851@adfree.Usenet.com> Leythos

<void@nowhere.lan> wrote:

>About half our clients are under 100 nodes on the lan, they most often

>have one or two two servers and we could install WSUS on one or the

>single server, but the servers are very stable and adding another

>component to them might not provide the same stability - so, it's still

>a catch-22, but WSUS might just be the only real way around this.

 

Either that, or use hostnames rather than IPs in your firewalling...

 

--

If quitters never win, and winners never quit,

what fool came up with, "Quit while you're ahead"?

Guest Leythos
Posted

Re: It would be nice if MS could settingle on a single subnet for updates

 

In article <rg4ka3h2om69d2nfhrhv5eo8hfb4s16shk@4ax.com>,

spam_narf_spam@crazyhat.net says...

> In message <MPG.2113d9776ab5f488989851@adfree.Usenet.com> Leythos

> <void@nowhere.lan> wrote:

>

> >About half our clients are under 100 nodes on the lan, they most often

> >have one or two two servers and we could install WSUS on one or the

> >single server, but the servers are very stable and adding another

> >component to them might not provide the same stability - so, it's still

> >a catch-22, but WSUS might just be the only real way around this.

>

> Either that, or use hostnames rather then IPs in your firewalling...

 

I wish I could, but the firewalls convert names to IPs for that function.
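For anyone wondering what that conversion looks like: a "hostname" rule is really an IP rule frozen at the moment the firewall resolves the name, so later DNS changes are missed. A small sketch of that one-shot resolution step in Python (real firewall products differ; this only illustrates the snapshot behaviour):

```python
import socket

def snapshot_ips(hostname):
    """Resolve a hostname to the set of IPv4 addresses it maps to right now.

    A firewall that 'supports hostnames' typically does exactly this once,
    when the rule is loaded - addresses added to DNS later are not matched.
    """
    infos = socket.getaddrinfo(hostname, None, family=socket.AF_INET)
    return sorted({info[4][0] for info in infos})

print(snapshot_ips("localhost"))  # typically ['127.0.0.1']
```

A large update network served by round-robin DNS can hand out different addresses on every lookup, which is exactly why a one-shot snapshot makes a poor firewall rule.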

 

--

 

Leythos

- Igitur qui desiderat pacem, praeparet bellum.

- Calling an illegal alien an "undocumented worker" is like calling a

drug dealer an "unlicensed pharmacist"

spam999free@rrohio.com (remove 999 for proper email address)

Guest cquirke (MVP Windows shell/user)
Posted

Re: It would be nice if MS could settingle on a single subnet for updates

 

On Fri, 27 Jul 2007 15:13:52 +0100, "Mike Brannigan"

>"Leythos" <void@nowhere.lan> wrote in message

>> Mike.Brannigan@localhost says...

 

This thread is about the collision between...

 

No automatic code base changes allowed

 

...and...

 

Vendors need to push "code of the day"

 

Given the only reason we allow vendors to push "code of the day" is

because their existing code fails too often for us to manage manually,

one wonders if our trust in these vendors is well-placed.

 

A big part of this is knowing that only the vendor is pushing the

code, and that's hard to be sure of. If malware were to hijack a

vendor's update pipe, it could blow black code into the core of

systems, right past all those systems' defenses.

 

With that in mind, I've switched from wishing MS would use open

standards for patch transmission to being grateful for whatever they

can do to harden the process. I'd still rather not have to leave

myself open to injections of "code of the day", though.

>NO never ever ever in a production corporate environment do you allow ANY of

>your workstations and servers to directly access anyone for patches

>I have never allowed this or even seen it in real large or enterprise

>customers. (the only place it may crop up is in mom and pop

>10 PCs and a Server shops).

 

And there's the problem. MS concentrates on scaling up to enterprise

needs, where the enterprise should consolidate patches in one location

and then drive these into systems under their own in-house control.

 

So scaling up is well catered for.

 

But what about scaling down?

 

Do "mom and pop" folks not deserve safety? How about single-PC users

which have everything they own tied up in that one vulnerable box?

What's best-practice for them - "trust me, I'm a software vendor"?

 

How about scaling outwards?

 

When every single vendor wants to be able to push "updates" into your

PC, even for things as trivial as printer and mouse drivers, how do

you manage these? How do you manage 50 different ad-hoc update

delivery systems, some from vendors who are not much beyond "Mom and

Pop" status themselves? Do we let Zango etc. "update" themselves?

 

The bottom line: "Ship now, patch later" is an unworkable model.

>As you said your only problem is with Microsoft then the solution I have

>outlined above is the fix - only one server needs access through your

>draconian firewall policies. And you get a real secure enterprise patch

>management solution that significantly lowers the risk to your environment.

 

That's prolly the best solution, for those with the resources to

manage it. It does create a lock-in advantage for MS, but at least it

is one that is value-based (i.e. the positive value of a

well-developed enterprise-ready management system).

 

However, I have to wonder how effective in-house patch evaluation

really is, especially if it is to keep up with tight time-to-exploit

cycles. It may be the closed-source equivalent of the open source

boast that "our code is validated by a thousand reviewers"; looks good

on paper, but is it really effective in practice?

 

 

>--------------- ----- ---- --- -- - - -

To one who has never seen a hammer,

nothing looks like a nail

>--------------- ----- ---- --- -- - - -

Guest Mike Brannigan
Posted

Re: It would be nice if MS could settingle on a single subnet for updates

 

In line below

 

--

 

Mike Brannigan

"cquirke (MVP Windows shell/user)" <cquirkenews@nospam.mvps.org> wrote in

message news:oas0b3pgjqdp3cb1vrrr6mb405jq3nen3r@4ax.com...

> On Fri, 27 Jul 2007 15:13:52 +0100, "Mike Brannigan"

>>"Leythos" <void@nowhere.lan> wrote in message

>>> Mike.Brannigan@localhost says...

>

> This thread is about the collision between...

>

> No automatic code base changes allowed

>

> ...and...

>

> Vendors need to push "code of the day"

>

> Given the only reason we allow vendors to push "code of the day" is

> because their existing code fails too often for us to manage manually,

> one wonders if our trust in these vendors is well-placed.

>

> A big part of this is knowing that only the vendor is pushing the

> code, and that's hard to be sure of. If malware were to hijack a

> vendor's update pipe, it could blow black code into the core of

> systems, right pas all those system's defenses.

>

> With that in mind, I've switched from wishing MS would use open

> standards for patch transmission to being grateful for whatever they

> can do to harden the process. I'd still rather not have to leave

> myself open to injections of "code of the day", though.

>

>>NO never ever ever in a production corporate environment do you allow ANY

>>of

>>your workstations and servers to directly access anyone for patches

>>I have never allowed this or even seen it in real large or enterprise

>>customers. (the only place it may crop up is in mom and pop

>>10 PCs and a Server shops).

>

> And there's the problem. MS concentrates on scaling up to enterprise

> needs, where the enterprise should consolodate patches in one location

> and then drive these into systems under their own in-house control.

>

> So scaling up is well catered for.

>

> But what about scaling down?

>

 

That is where the free WSUS 3.0 product is targeted. One server to do all

the downloads and then you approve the ones you want to deploy and then your

client PCs just get them under their normal Windows Automatic Update process

(but this time they point to your WSUS server internally instead of going

external).
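For reference, the redirection Mike describes is driven by the Windows Update policy keys, normally pushed via Group Policy. A sketch of the registry form, assuming a hypothetical internal server named wsus01:

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate]
"WUServer"="http://wsus01"
"WUStatusServer"="http://wsus01"

[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU]
"UseWUServer"=dword:00000001
```

With those values set, the Automatic Updates client fetches its approvals and payloads from the internal server, so only that one box needs an exception at the firewall.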

> Do "mom and pop" folks not deserve safety? How about single-PC users

> which have everything they own tied up in that one vulnerable box?

> What's best-practice for them - "trust me, I'm a software vendor"?

>

 

As above for anyone with more than a couple of PCs.

Otherwise you subscribe to the security update notifications, are notified when they are

released, use the Download Catalog to get the updates, test them as you see

fit, then deploy them as you see fit using any production approach you want.

 

Sorry but this is not new and has been around for the last few years - patch

management for everyone from single user to mega-corp is well understood

out there in the field, which is why I was so surprised by the OP's post and

approach.

> How about scaling outwards?

>

 

WSUS scales outwards too - unless you mean something else.

If you mean integration with other vendors: if they wish to create a

catalog, then SCCM 2007 can handle importing third-party or in-house catalog

data to identify out-of-patch machines, patch them, etc. I suggest you read up

on the SCCM 2007 product.

> When every single vendor wants to be able to push "updates" into your

> PC, even for things as trivial as prinyers and mouse drivers, how do

> you manage these?

 

Since MSFT distributes a huge number of these patches and updated drivers,

you can continue to handle those the same way. As regards the rest, if they

start using standard catalog methods as I mentioned above, then integration

becomes a no-brainer. Otherwise you again have to be informed of their new

updates - most publish some form of alert - then download, test, and deploy

using whatever tools you like in your enterprise.

> How do you manage 50 different ad-hoc update

> delivery systems, some from vendors who are not much beyond "Mom and

> Pop" status themselves? Do we let Zango etc. "update" themselves?

>

> The bottom line: "Ship now, patch later" is an unworkable model.

>

 

There is almost no such thing as flawless software, particularly when

you are talking about tens of millions of lines of code in an OS.

Every major OS vendor on the planet regularly ships patches and updates for

their products.

>>As you said your only problem is with Microsoft then the solution I have

>>outlined above is the fix - only one server needs access through your

>>draconian firewall policies. And you get a real secure enterprise patch

>>management solution that significantly lowers the risk to your

>>environment.

>

> That's prolly the best solution, for those with the resources to

> manage it. It does create a lock-in advantage for MS, but at least it

> is one that is value-based (i.e. the positive value of a

> well-developed enterprise-ready management system).

>

> However, I have to wonder how effective in-house patch evaluation

> really is, especially if it is to keep up with tight time-to-exploit

> cycles.

 

Then that is their problem and they must address it in the manner best

suited to them, either by increasing the resources assigned to it or by

taking a firmer approach, such as only taking absolutely critical patches.

I have worked with enterprises across this whole spectrum, from dedicated

patch-management teams that perform full and complete regression testing on

all patches they roll out internally, to extremely poor ad hoc

solutions with minimal testing.

> It may be the closed-source equivalent of the open source

> boast that "our code is validated by a thousand reviewers"; looks good

> on paper, but is it really effective in practice?

>

>

>

>>--------------- ----- ---- --- -- - - -

> To one who has never seen a hammer,

> nothing looks like a nail

>>--------------- ----- ---- --- -- - - -

Guest cquirke (MVP Windows shell/user)
Posted

Re: It would be nice if MS could settingle on a single subnet for updates

 

On Wed, 1 Aug 2007 20:40:01 +0100, "Mike Brannigan"

>Mike Brannigan

>"cquirke

>> On Fri, 27 Jul 2007 15:13:52 +0100, "Mike Brannigan"

>>>"Leythos" <void@nowhere.lan> wrote in message

>>>> Mike.Brannigan@localhost says...

>> MS concentrates on scaling up to enterprise needs, where

>> the enterprise should consolodate patches in one location

>> and then drive these into systems under their own in-house control.

>> But what about scaling down?

>That is where the free WSUS 3.0 product is targeted. One server to do all

>the downloads and then you approve the ones you want to deploy and then your

>client PCs just get them under their normal Windows Automatic Update process

>(but this time they point to your WSUS server internally instead of going

>external).

 

Sounds good - by "one server", do you mean a server dedicated to this

task alone, or can that be the only server you have? Will it run on

SBS, or is there a different solution for that?

 

This is good, but it's still not scaling all the way down to a single

PC that has everything important on it. Those users have no choice

but to trust patches not to mess up.

 

Windows *almost* offers a mitigation, but screws it up (or rather,

doesn't see the need to work the way I'd hoped it would).

 

There's an Automatic Updates option to "download now, but let me

decide when to install them". When I read that, I thought it would

put me in full control over such updates (e.g. "...when or IF to

install them") but it does not. If I click "no, don't install these",

it will stealth them in via the next shutdown.
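For the record, the option being described maps to the Automatic Updates policy value AUOptions. A sketch of the registry form of the "download but let me decide" setting (normally set through Group Policy; shown here only to name the knob being discussed):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU]
; 2 = notify before download
; 3 = auto download, notify before install
; 4 = auto download and schedule the install
"AUOptions"=dword:00000003
```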

 

This is a pity, because otherwise it would facilitate this policy:

- download patches as soon as available but DO NOT INSTALL

- watch for reports of problems with patches

- watch for reports of exploits

- if exploits, get offline and install already-downloaded patches

- else if no "dead bodies" reported from patch bugs, install patches

- but if reports of "dead bodies", can avoid the relevant patches

 

As it is, if I don't want MS ramming "code of the day" when my back is

turned, I have to disable downloading updates altogether, so...

- do NOT download patches as soon as available

- watch for reports of problems with patches

- watch for reports of exploits

- if exploits, have to stay online to download patches -> exploited

- if no "dead bodies" from patch bugs, download and install patches

- but if reports of "dead bodies", can avoid the relevant patches

>> How do you manage 50 different ad-hoc update

>> delivery systems, some from vendors who are not much beyond "Mom and

>> Pop" status themselves? Do we let Zango etc. "update" themselves?

>>

>> The bottom line: "Ship now, patch later" is an unworkable model.

>There is almost no such thing as flawless software and particularly when

>you are talking about tens of millions of lines of code in an OS.

 

Sure, and the lesson is to design and code with this in mind, reducing

automatic exposure of surfaces to arbitrary material, and ensuring

that any code can be amputated immediately, pending patch.

 

If all non-trivial code has bugs, and you need bug-free code, then the

solution is to keep that code trivial ;-)

 

I see this as akin to high-wear surfaces within mechanical systems.

You'd nearly always design such systems so that high-wear parts are

cheap and detachable for field replacement, e.g. a pressed-steel bore

within an aluminium block, piston rings that are not built permanently

into the piston, removable crank bearings rather than ball bearings

running directly on crank and case surfaces, etc.

 

I don't see enough of that awareness in modern Windows. If anything,

the trend is in the other direction; more automated and complex

handling of material that the user has indicated no intention to

"open", poor or absent file type discipline, subsystems that cannot be

excluded from installation or uninstalled, etc.

>Every major OS vendor on the planet regularly ships patches and updates for

>their products.

 

They do indeed, yes, and many vendors are lagging behind MS in

sensible practice. For example, Sun were still allowing Java applets

to "ask" the JRE to pass them through to older versions "for backward

compatibility", and installing new JRE versions did not get rid of old

ones, allowing these to remain a threat.

 

But the bottom line is, it's a suspension of disbelief to trust patch

code (that may be hastily developed under present exploits) to work

when slid under installed applications that could not possibly have

been written for such code, especially when the reason to swallow such

code is because the same vendor messed up when writing the same code

under less-rushed pre-release circumstances.

 

What should have happened, is that the first time some unforgivable

code defect allowed widespread exploitation (say, the failure to check

MIME type against file name extension and actual material type when

processing in-line files in HTML "message text"), the vendor should

have been stomped so hard that they'd dare not make the same sort of

mistake again.

 

Instead, the norm is for software vendors to fail so regularly that we

have to automate the process of "patching". Vendors can do this by

making the patch material available on a server, leaving it to the

user to cover the cost of obtaining it. Meanwhile, stocks of

defective product are not recalled, nor are replacement disks added to

those packages, so what you buy after the fact is still defective, and

still has to be patched at your expense.

 

Couple that with the common advice to "just" wipe and re-install, and

you will be constantly falling back to unpatched status, and having to

pull down massive wads of "repair" material - something that just is

not possible to do via pay-per-second dial-up.

 

I was impressed when MS shipped XP SP2 CDs to end users, as well as

the security roll-up CDs for Windows all the way back to Win98. But

we still need the ability to regenerate a fully-functional and

fully-patched OS installation and maintenance disk - something that

"royalty" OEMs don't provide their victims even at purchase time.

>> However, I have to wonder how effective in-house patch evaluation

>> really is, especially if it is to keep up with tight time-to-exploit

>> cycles.

>Then that is their problem

 

Not really, no. The problem arises from a bug rate within exposed

surfaces that is unsustainable for the extent of those surfaces,

forcing too many patches to manage manually. Yes, it becomes our

problem, but we didn't cause it other than by choosing to use a

platform that is so widely used that it is immediately attacked.

 

That equation not only favors minority platforms such as Mac OS and

Linux, it also favors an abandonment of personal computing for SaaS,

for which the risks are currently way under-estimated.

 

 

Note that I don't see the need to patch as an MS issue, given that (as

you mention) all equally-complex systems have similar needs to patch.

 

What puts MS on the sharp end, is the degree of exposure - like the

difference between the bearing on your boot hinge, and the bearing

that holds the crankshaft in the engine block.

 

It's been amusing to see how well (or rather, how poorly) Apple's

Safari browser has fared, when exposed to the same "wear".

 

A trend I really don't like is where relatively trivial software

vendors jump on the "update" bandwagon, leveraging it to re-assert

their choice of settings or to "call home". It's bad enough that buggy

code quality is rewarded with tighter vendor dependency, as it is.

>I have worked with enterprise across this whole spectrum from full dedicated

>patch management teams that perform full and complete regression testing for

>all patches they need to roll out internally to extremely poor ad hoc

>solutions and minimal testing,.

 

I'm not talking enterprises here. They are well-positioned to

manage the problem; it's the "free" end-users with their single PCs or

small peer-to-peer LANs I'm thinking about.

 

Collectively, all those loose systems can act as very large botnets.

>> It may be the closed-source equivalent of the open source

>> boast that "our code is validated by a thousand reviewers"; looks good

>> on paper, but is it really effective in practice?

 

 

>------------ ----- ---- --- -- - - - -

The most accurate diagnostic instrument

in medicine is the Retrospectoscope

>------------ ----- ---- --- -- - - - -

Guest Kerry Brown
Posted

Re: It would be nice if MS could settingle on a single subnet for updates

 

"cquirke (MVP Windows shell/user)" <cquirkenews@nospam.mvps.org> wrote in

message news:llq4b3p39bgbsq7su9c7akjghgeq55narl@4ax.com...

> On Wed, 1 Aug 2007 20:40:01 +0100, "Mike Brannigan"

>>Mike Brannigan

>>"cquirke

>>> On Fri, 27 Jul 2007 15:13:52 +0100, "Mike Brannigan"

>>>>"Leythos" <void@nowhere.lan> wrote in message

>>>>> Mike.Brannigan@localhost says...

>

>>> MS concentrates on scaling up to enterprise needs, where

>>> the enterprise should consolodate patches in one location

>>> and then drive these into systems under their own in-house control.

>

>>> But what about scaling down?

>

>>That is where the free WSUS 3.0 product is targeted. One server to do all

>>the downloads and then you approve the ones you want to deploy and then

>>your

>>client PCs just get them under their normal Windows Automatic Update

>>process

>>(but this time they point to your WSUS server internally instead of going

>>external).

>

> Sounds good - by "one server", do you mean a server dedicated to this

> task alone, or can that be the only server you have? Will it run on

> SBS, or is there a different solution for that?

>

 

 

SBS 2003 R2 comes with WSUS out of the box.

 

--

Kerry Brown

Microsoft MVP - Shell/User

http://www.vistahelp.ca

Guest cquirke (MVP Windows shell/user)
Posted

Re: It would be nice if MS could settingle on a single subnet for updates

 

On Thu, 2 Aug 2007 20:50:42 -0700, "Kerry Brown"

>"cquirke (MVP Windows shell/user)" wrote in

>> Sounds good - by "one server", do you mean a server dedicated to this

>> task alone, or can that be the only server you have? Will it run on

>> SBS, or is there a different solution for that?

>SBS 2003 R2 comes with WSUS out if the box.

 

Niiice... so that's one SBS box, WSUS built in!

 

>--------------- ----- ---- --- -- - - -

Error Messages Are Your Friends

>--------------- ----- ---- --- -- - - -

Guest Kerry Brown
Posted

Re: It would be nice if MS could settingle on a single subnet for updates

 

"cquirke (MVP Windows shell/user)" <cquirkenews@nospam.mvps.org> wrote in

message news:v369b31gdcq1569ifmdndl1n09hjp1jgn4@4ax.com...

> On Thu, 2 Aug 2007 20:50:42 -0700, "Kerry Brown"

>>"cquirke (MVP Windows shell/user)" wrote in

>

>>> Sounds good - by "one server", do you mean a server dedicated to this

>>> task alone, or can that be the only server you have? Will it run on

>>> SBS, or is there a different solution for that?

>

>>SBS 2003 R2 comes with WSUS out if the box.

>

> Niiice... so that's one SBS box, WSUS built in!

>

>

 

 

An SBS install is a fairly complicated procedure and takes a few tries to

get right. WSUS is not installed by default; you have to install it. If you

follow the instructions in the readme files, it is installed. If you stick

the first CD in (or the only DVD) and just let the install run, clicking

"Next", it doesn't get installed. WSUS does need quite a bit of resources.

The SQL instance it uses will grow to a point where it is hogging all the

free RAM if you don't throttle it back manually. It takes a lot of disk

space. You also spend quite a bit of time managing it and approving updates.

Because of this you may not want it on a heavily loaded server. A full SBS

install is a heavily loaded server - Domain Controller, Exchange, SharePoint,

Web (for intranet and RWW), SQL, file server, WSUS, ISA, and probably more

I've forgotten. It needs a lot of hardware to run all this. At a minimum you

need 2 GB of RAM (4 is preferred), at least two fairly large drives mirrored

(preferably more, with RAID 5), and a server-class CPU (dual-core Opteron or

Xeon, preferably two). Given this hardware, yes, all on one box :-)
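On the RAM-hogging point, the usual manual throttle is the SQL Server "max server memory" option, applied to the instance WSUS uses. A T-SQL sketch, run via sqlcmd/osql against that instance - the 512 MB cap is an arbitrary example, not a recommendation:

```
-- Allow changing advanced options, then cap this instance's memory use.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory', 512;  -- value in MB
RECONFIGURE;
```

The instance will then release memory back toward that ceiling rather than growing to consume all free RAM on the box.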

 

Of course Microsoft says it will run on a 750 MHz CPU with 512 MB of RAM and

16 GB of hard drive space. I have actually seen an IBM server configured

like this. It was delivered from IBM set up this way. It was unbelievably

unstable and slow. Even their minimum recommended system of a 1 GHz CPU

with 1 GB of RAM is woefully inadequate.

 

--

Kerry Brown

Microsoft MVP - Shell/User

http://www.vistahelp.ca

Guest cquirke (MVP Windows shell/user)
Posted

Re: It would be nice if MS could settingle on a single subnet for updates

 

On Sat, 4 Aug 2007 10:14:17 -0700, "Kerry Brown"

>"cquirke (MVP Windows shell/user)" wrote in

>> On Thu, 2 Aug 2007 20:50:42 -0700, "Kerry Brown"

>>>"cquirke (MVP Windows shell/user)" wrote in

>>>> Sounds good - by "one server", do you mean a server dedicated to this

>>>> task alone, or can that be the only server you have? Will it run on

>>>> SBS, or is there a different solution for that?

>>>SBS 2003 R2 comes with WSUS out if the box.

>> Niiice... so that's one SBS box, WSUS built in!

>An SBS install is a fairly complicated procedure and takes a few tries to

>get it right the first time. WSUS is not installed by default. You have to

>install it. If you follow the instructions in the readme files it is installed.

 

OK... a bit like USBSupp.exe in Win95 SR2, or NetBEUI in XP ;-)

>WSUS does need quite a bit of resources. The SQL instance it

>uses will grow to a point where it is hogging all the free RAM if

>you don't throttle it back manually.

 

Hmmm... not just a "fat bump in the power cord" then...

>It takes a lot of disk space. You also spend quite a bit of time managing it

>approving updates. Because of this you may not want it on a heavily loaded

>server. A full SBS install is a heavily loaded server - Domain Controller,

>Exchange, SharePoint, Web (for intranet and RWW), SQL, file server, WSUS,

>ISA, and probably more. It needs a lot of hardware to run all this.

 

This is interesting, as I thought SBS was a "leaner" option compared

to formal Windows Server, but maybe not, if it has so much work to do?

>At a minimum you need 2GB of RAM (4 is preferred), at least two fairly

>large drives mirrored (preferably more with RAID 5), and a server class CPU

>(dual core Opteron or Xeon, preferably two). Given this hardware yes, all on

>one box :-)

 

Hmm... 1 x S-ATA 320G, 1G RAM, 2GHz Core 2 Duo any good? Will

boosting RAM to 2G help? Won't 4G need 64-bit?

>Of course Microsoft says it will run on a 750 MHz CPU with 512 MB of RAM and

>16 GB of hard drive space.

 

:-)

>I have actually seen an IBM server configured like this. It was delivered

>from IBM setup this way. It was unbelievably unstable and slow. Even

>their minimum recommended system of a 1 GHz CPU with 1 GB of

>RAM is woefully inadequate.

 

Interesting the RAM requirements are so high, but I guess that's a

"server thing", after all - especially as ad-hoc requests from client

PCs will be hard to predict and optimise.

 

Heh - just as off-the-peg hardware grows up to cope fairly easily with

all this, there will be a new (Longhorn) version of the OS ;-)

 

 

>--------------- ---- --- -- - - - -

"We have captured lightning and used

it to teach sand how to think."

>--------------- ---- --- -- - - - -

Guest Kerry Brown
Posted

Re: It would be nice if MS could settingle on a single subnet for updates

 

"cquirke (MVP Windows shell/user)" <cquirkenews@nospam.mvps.org> wrote in

message news:mggcb3t6jjtq2kfkagnh7gmh98sm2r81c9@4ax.com...

> On Sat, 4 Aug 2007 10:14:17 -0700, "Kerry Brown"

>>"cquirke (MVP Windows shell/user)" wrote in

>>> On Thu, 2 Aug 2007 20:50:42 -0700, "Kerry Brown"

>>>>"cquirke (MVP Windows shell/user)" wrote in

>

>>>>> Sounds good - by "one server", do you mean a server dedicated to this

>>>>> task alone, or can that be the only server you have? Will it run on

>>>>> SBS, or is there a different solution for that?

>

>>>>SBS 2003 R2 comes with WSUS out if the box.

>

>>> Niiice... so that's one SBS box, WSUS built in!

>

>>An SBS install is a fairly complicated procedure and takes a few tries to

>>get it right the first time. WSUS is not installed by default. You have to

>>install it. If you follow the instructions in the readme files it is

>>installed.

>

> OK... a bit like USBSupp.exe in Win95 SR2, or NetBEUI in XP ;-)

>

>>WSUS does need quite a bit of resources. The SQL instance it

>>uses will grow to a point where it is hogging all the free RAM if

>>you don't throttle it back manually.

>

> Hmmm... not just a "fat bump in the power cord" then...

>

>>It takes a lot of disk space. You also spend quite a bit of time managing

>>it

>>approving updates. Because of this you may not want it on a heavily loaded

>>server. A full SBS install is a heavily loaded server - Domain Controller,

>>Exchange, SharePoint, Web (for intranet and RWW), SQL, file server, WSUS,

>>ISA, and probably more. It needs a lot of hardware to run all this.

>

> This is interesting, as I thought SBS was a "leaner" option compared

> to formal Windows Server, but maybe not, if it has so much work to do?

>

 

SBS is anything but lean. Until SBS was released it was the "best practice"

to have at least four or five servers to run all this.

>>At a minimum you need 2GB of RAM (4 is preferred), at least two fairly

>>large drives mirrored (preferably more with RAID 5), and a server class

>>CPU

>>(dual core Opteron or Xeon, preferably two). Given this hardware yes, all

>>on

>>one box :-)

>

> Hmm... 1 x S-ATA 320G, 1G RAM, 2GHz Core 2 Duo any good? Will

> boosting RAM to 2G help? Won't 4G need 64-bit?

>

 

SBS will run but if you have more than a couple of users it may be slow. My

server at home with only two users has a P4 1.6 GHz and 1 GB RAM. I am using

SBS 2003 SP1 with no SQL other than the two default MSDE instances and no

ISA. It is fine for two users. I wouldn't install it for a customer. A

server in a business can be a single point of failure. Because of this you

want as much redundancy as possible. I'd add a second drive as a mirror. I'd

also stay away from desktop motherboards and cases/PSUs. With most

motherboards I've used with SBS 2003 R2 and a 64 bit CPU, 4 GB of RAM shows

up as 4 GB despite the 32 bit limit. Server motherboards usually support

relocating the address space for the hardware. Even a desktop board I tested

recently showed 3.99 GB. It would be interesting to find out the technical

details but I've never bothered.

>>Of course Microsoft says it will run on a 750 MHz CPU with 512 MB of RAM

>>and

>>16 GB of hard drive space.

>

> :-)

>

>>I have actually seen an IBM server configured like this. It was delivered

>>from IBM setup this way. It was unbelievably unstable and slow. Even

>>their minimum recommended system of a 1 GHz CPU with 1 GB of

>>RAM is woefully inadequate.

>

> Interesting the RAM requirements are so high, but I guess that's a

> "server thing", after all - especially as ad-hoc requests from client

> PCs will be hard to predict and optimise.

 

It's all the "servers" that are running on one computer. Four SQL instances,

Domain Controller, Exchange, ISA, WSUS, file server, print server, etc.

>

> Heh - just as off-the-peg hardware grows up to cope fairly easily with

> all this, there will be a new (Longhorn) version of the OS ;-)

 

The Longhorn version of SBS will be 64 bit only so it will require new

hardware. I don't think the minimums have been decided on yet or at least

not announced publicly but I expect they are much higher :-)

 

Don't get me wrong. I really like SBS and recommend it for businesses as

small as four or five users. It is, however, a real server and needs real

server equipment to work properly. Note this needn't be drastically

expensive. I can build a decent server for less than $1,500 CDN for the

hardware. I can build a server that will run SBS right up to the max number

of users for less than $2,500 CDN.

 

--

Kerry Brown

Microsoft MVP - Shell/User

http://www.vistahelp.ca

Guest Leythos
Posted

Re: It would be nice if MS could settingle on a single subnet for updates

 

In article <#CHW67E2HHA.5164@TK2MSFTNGP05.phx.gbl>, kerry@kdbNOSPAMsys-

tems.c*a*m says...

> I can build a server that will run SBS right up to the max number

> of users for less than $2,500 CDN.

 

LOL - and when used by 70-75 users who hit the SQL database hard and

have tons of email, etc., that $2500 server will crawl, and

they will complain non-stop - at least if they've ever used anything

fast :)

 

I've got about 40 customers with SBS 2003 Premium, and a dual-CPU box with

5x SATA, 4 GB RAM, at minimum an LTO-2 or DAT-72 tape drive, and dual 550 W

PSUs is going to run a little more than $2500 in most all cases :)

 

--

 

Leythos

- Igitur qui desiderat pacem, praeparet bellum.

- Calling an illegal alien an "undocumented worker" is like calling a

drug dealer an "unlicensed pharmacist"

spam999free@rrohio.com (remove 999 for proper email address)
