
Sargon Speaks

The greatest ruler who ever lived, writing exclusively for Radsoft. Write to Sargon at sargon@radsoft.net.

Wednesday 29 January 2003 - SQL-Slammer I

The 'SQL Slammer' worm (as it has come to be called) is based on code originally developed by the Hacker Union of China, a group which has been active in anti-American, pro-Chinese hacking in the past few years. There is NO evidence that this group had anything to do with the release of SQL Slammer: the code on which the worm is based has been widely available for some time, so anyone could have modified it and produced SQL Slammer.

The worm contained NO trojan/backdoor code. It exploited a flaw in Microsoft's SQL Server 2000 to cause a buffer overrun, a flaw Microsoft patched in July of last year. The overflow occurs because SQL Server 2000 improperly handles data sent to the Microsoft SQL Monitor port. After overrunning the buffer, the attacker can run code as SYSTEM, since SQL Server 2000 itself runs as SYSTEM (and thus has SYSTEM privileges).
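The vulnerable code itself is Microsoft's and isn't public, so what follows is purely an illustration of the general pattern behind this kind of flaw, with a hypothetical function name and buffer size of my own invention: a fixed-size buffer is filled from network data whose length is never checked, and everything written past the end of the buffer lands in adjacent memory, including the saved return address that decides where execution goes next.

#include <stdio.h>
#include <string.h>

/* Hypothetical illustration only; this is NOT Microsoft's code.
 * A fixed-size stack buffer is filled from network data whose length
 * is never checked. Anything past the 64th byte overwrites adjacent
 * stack memory, including the saved return address, which is how an
 * attacker redirects execution to code of his own choosing. */
void handle_request(const char *packet, size_t packet_len)
{
    char name[64];                      /* fixed-size stack buffer */
    memcpy(name, packet, packet_len);   /* BUG: no bounds check    */
    printf("request for %.64s\n", name);
}

/* The fix is simply to bound the copy before writing. */
void handle_request_fixed(const char *packet, size_t packet_len)
{
    char name[64];
    size_t n = packet_len < sizeof name - 1 ? packet_len : sizeof name - 1;
    memcpy(name, packet, n);
    name[n] = '\0';
    printf("request for %s\n", name);
}

int main(void)
{
    const char probe[] = "hello";       /* a short, harmless request */
    handle_request_fixed(probe, sizeof probe - 1);
    return 0;
}

SQL Server 2000's handling of data on the SQL Monitor port made essentially this class of mistake, and because the server runs as SYSTEM, the hijacked execution runs as SYSTEM too.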

After causing the buffer overflow, SQL Slammer began generating pseudo-random IP addresses and targeting them in an attempt to infect further servers. The payload is very small (376 bytes) and could thus be passed very quickly (in fact, in the first packet sent) via UDP to port 1434. The worm did indeed spread VERY quickly, creating, in effect, denial-of-service attacks against infected networks. One flaw in the pseudo-random generator is that it occasionally got stuck (lousy programming), leading to the same network being attacked hundreds of times. There was also no bandwidth-control mechanism, so outbound bandwidth was almost immediately exhausted.
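To illustrate that 'getting stuck' point: the worm's address generator is reported to be a simple linear congruential formula, and with badly chosen constants such a formula can collapse into a very short cycle, so the 'random' targets repeat endlessly. Here is a minimal C sketch, using deliberately tiny made-up constants (NOT Slammer's actual ones), comparing a full-period generator with one that degenerates into a two-step loop:

#include <stdio.h>
#include <stdint.h>

/* Illustration only: the constants below are invented for the demo and
 * are NOT the ones Slammer used. The point is that a linear congruential
 * generator with badly chosen constants can fall into a very short cycle,
 * so the 'random' targets repeat over and over. */

#define M 65536u   /* small modulus so the cycle is easy to count */

/* count how many steps x = (a*x + c) mod M takes to return to the seed */
unsigned cycle_length(uint32_t a, uint32_t c, uint32_t seed)
{
    uint32_t x = seed;
    unsigned steps = 0;
    do {
        x = (a * x + c) % M;
        steps++;
    } while (x != seed && steps <= M);
    return steps;
}

int main(void)
{
    /* good constants (a % 4 == 1, c odd): full period of 65536 */
    printf("good LCG period: %u\n", cycle_length(25173u, 13849u, 1u));

    /* bad combination (a = 3, c = 0, power-of-two seed): period 2 */
    printf("bad  LCG period: %u\n", cycle_length(3u, 0u, 8192u));
    return 0;
}

A generator trapped in a cycle like the second one explains how the same networks could be hit hundreds of times while others were, for a while, spared.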

Once networks were inundated, a flood of ICMP Host/Port Unreachable messages was, of course, sent out across the Internet, further complicating matters (and confusing quite a few unseasoned network engineers).

The worm hit sites around the world beginning around 0500 UTC Saturday 25-Jan. Within 30 minutes, backbones around the world were inundated. Graphs showing the effect of the worm on BGP traffic (BGP is ***the*** protocol used to route traffic around the Internet; if BGP goes haywire, the Internet goes down) can be found here.

http://www.research.att.com/~griffin/bgp_monitor/sql_worm.html

These two graphs show the number of routes (or, oversimplified, networks attached to the Internet) the 'big seven' backbones normally see (or know about). As they indicate, at approximately 0530 UTC the backbones stopped seeing thousands of routes. The second graph shows the ten-minute time frame from 0525 until 0535.

This tremendous loss of routes has two effects. The first (and obvious) one is that parts of the Internet 'fall off the face of the earth.' The second one, not so obvious to the typical end user but every bit as devastating in impact, is how the loss of those routes affects BGP and the routers which run it. Whenever a router discovers that a route is no longer available, it tells all the other routers to which it is connected that the route is gone. Those routers must then recalculate their BGP routing tables, a process which can become very CPU-intensive. Thus a router, already straining under the tremendous traffic load caused by SQL Slammer, slows down even more as it recalculates its BGP tables. And it must now do this MUCH more often than normal, since routes are dropping at an alarming rate. The end result wasn't pretty. A graph of BGP updates seen by MIT's routers can be found here.
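To see why a wave of withdrawals hurts so much, picture a toy path-vector model; this is a sketch of the general mechanism only, not real BGP, and the peers and path lengths are invented. A router keeps every candidate path its peers have advertised for a prefix and selects a best one; when a peer withdraws its path, the router must rescan the remaining candidates, and if its choice changes it must announce that to its own peers, each of whom repeats the work.

#include <stdio.h>

/* Toy path-vector model (illustration only, not real BGP). A router
 * holds the candidate routes its peers have advertised for one prefix;
 * 'best' here is simply the shortest AS path. Withdrawing a candidate
 * forces a rescan, and a change in the best path has to be announced
 * to the other peers, who must then do the same work themselves. */

#define MAX_PEERS 4

struct candidate {
    int valid;        /* has this peer advertised a path?  */
    int path_len;     /* AS-path length advertised by peer */
};

/* return the index of the best remaining peer, or -1 if no route is left */
int best_path(const struct candidate *c, int n)
{
    int best = -1;
    for (int i = 0; i < n; i++)
        if (c[i].valid && (best < 0 || c[i].path_len < c[best].path_len))
            best = i;
    return best;
}

int main(void)
{
    struct candidate routes[MAX_PEERS] = {
        { 1, 3 },     /* peer 0 advertises a 3-hop path */
        { 1, 2 },     /* peer 1 advertises a 2-hop path */
        { 1, 5 },     /* peer 2 advertises a 5-hop path */
        { 0, 0 },     /* peer 3 advertises nothing      */
    };

    int before = best_path(routes, MAX_PEERS);
    printf("best path is via peer %d\n", before);

    routes[1].valid = 0;                 /* peer 1 withdraws its route */
    int after = best_path(routes, MAX_PEERS);

    if (after < 0)
        printf("no route left: the prefix falls off the Internet\n");
    else if (after != before)
        printf("best path changed to peer %d: announce the change to "
               "every other peer, who must each rescan in turn\n", after);
    return 0;
}

Multiply that rescan by the hundred-thousand-odd prefixes in a full routing table and by every peer session, on a router that is simultaneously trying to forward a flood of worm traffic, and the CPU crunch is no surprise.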

http://nms.lcs.mit.edu/~dga/sqlworm.html

What is interesting, based on the information in these graphs, is that SQL Slammer hit harder and faster than the infamous Nimda worm.

One entire country, South Korea, disappeared completely from the Internet. Southeast Asia was hammered. The seven major backbones were on their knees. One backbone engineer sent this note to NANOG:

'dual gig pipes, each with sustained 780mbps... from one facility, 1.5+gbps sustained!!!'

That is an AMAZING statistic: sustained traffic of more than one and a half gigabits per second, from one single data center which hosted customer servers. Incredible.

Southern Californians had almost no Internet access for parts of Saturday. The largest residential mortgage company in the U.S., Countrywide Financial Corp., was still down on Monday morning. The 911 emergency center serving parts of suburban Seattle was knocked out for several hours Saturday. Bank of America's automated-teller machines were knocked offline. Even Microsoft was hit. See this link for some revealing leaked memos detailing how the company's entire internal network died.

http://www.theregister.co.uk/content/56/29073.html

The list goes on and on and on.

Network engineers at the major backbones quickly implemented filters in their routers to stop the traffic. Filtering, however, does introduce another set of problems. Routers, already struggling under the traffic load and the BGP load, were now required to check EVERY packet to see if it should be filtered. There were numerous reports of routers dying under the load.
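The test each filter performs is trivial; the pain is performing it for every single packet at line rate. A rough C sketch of the check, with field offsets taken from the IPv4 and UDP header layouts (real router ACLs live in heavily optimized forwarding code or hardware, not in anything this naive):

#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

/* Illustration of the per-packet test an anti-Slammer filter performs:
 * 'is this a UDP packet aimed at port 1434?' Returns 1 to drop the
 * packet, 0 to let it pass. */
int drop_slammer(const uint8_t *pkt, size_t len)
{
    if (len < 20 || (pkt[0] >> 4) != 4)
        return 0;                              /* not a plausible IPv4 packet */

    size_t ihl = (size_t)(pkt[0] & 0x0F) * 4;  /* IPv4 header length in bytes */
    if (ihl < 20 || pkt[9] != 17 || len < ihl + 8)
        return 0;                              /* bad header, not UDP, or cut */

    /* UDP destination port: bytes 2-3 of the UDP header, big-endian */
    uint16_t dport = (uint16_t)((pkt[ihl + 2] << 8) | pkt[ihl + 3]);
    return dport == 1434;
}

int main(void)
{
    uint8_t pkt[28] = { 0x45 };    /* version 4, header length 20 bytes */
    pkt[9]  = 17;                  /* protocol field: UDP               */
    pkt[22] = 1434 >> 8;           /* UDP destination port, high byte   */
    pkt[23] = 1434 & 0xFF;         /* UDP destination port, low byte    */
    printf("drop? %d\n", drop_slammer(pkt, sizeof pkt));
    return 0;
}

Even a check this cheap adds up when it must run millions of times a second on interfaces that are already saturated, which is why some routers simply died under it.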

Once filters were applied at egress points (usually data centers, where the servers are located), conditions across the backbones improved, since the traffic was now being dropped before it could cause further problems. However, until the infected servers are patched (and many still aren't), legitimate traffic headed to those data centers will be slow (and, in many cases, still is).

By Sunday the finger-pointing had begun. Certainly the server administrators who had not applied the two available patches (one a Windows 2000 service pack, the other a service pack for SQL Server 2000) are to be faulted. But I have lots of questions for other people as well.

Several server admins have told me that they regularly run Windows Update but were baffled as to why their servers were infected. The reason is simple: Windows Update does NOT check for updates to additional software installed on top of Windows. This means that SQL Server, Exchange Server, etc. are NOT patched by Windows Update. And these admins NEVER knew this. Should someone point a finger at Microsoft for never making this clear? Should someone at Microsoft have taken the time to make Windows Update look at more than the operating system? Hmmmmmmmm. If Microsoft can't keep its own servers at proper patch levels, how can the company expect anyone else to keep servers patched?

Further, there are LOTS of companies which need to take some long looks at their network-design and network-security departments. I am amazed that anyone would put mission-critical database servers in DMZs and allow them to talk to anyone on the Internet. Database servers are back-end servers; they should ONLY talk to machines making queries (e.g., web servers acting as front ends). And filters should have been placed in firewalls and routers so that nothing from the Internet could talk to these SQL servers.

It is interesting that Bank of America lost its entire automated-teller-machine network. There are several questions being asked about the design of this network (does BofA really use VPNs to allow ATMs to talk back to SQL servers in data centers across the Internet, as opposed to the traditional practice of using data circuits to link ATMs to data centers?), but the point to be made is this: the current design is horribly insecure. This leads to a further point.

It is remarkable that, at least at this point, no data has been stolen from these servers. Had the author of this worm written something more malicious, the potential for devastating loss would have been mind-boggling.

A further note: SQL Server 2000 is not the only Microsoft product affected by this flaw. Products dependent on the Microsoft Desktop Engine (commonly called MSDE) are also affected. Several Microsoft products fit into this category; a list can be found here.

http://www.microsoft.com/technet/treeview/default.asp?url=/technet/security/MSDEapps.asp

Numerous non-Microsoft products are affected as well, among them these:

Compaq Insight Manager
Crystal Reports Enterprise
Dell OpenManage
HP OpenView Internet Services Monitor
McAfee Centralized Virus Admin
McAfee ePolicy Orchestrator
Trend Micro Damage Cleanup Server
Websense Reporter
Veritas Backup Exec
WebBoard Conferencing Server
ISS RealSecure 7.0
ISS Internet Scanner

A complete list of products which run MSDE and may need to be patched can be found here:

http://www.sqlsecurity.com/DesktopDefault.aspx?tabid=13

Next: SQL-Slammer II >>
