
Microsoft to 3rd Party: 'Fix Your Bugs!'

'Kettle to pot! Kettle to pot! Come in, please!'

REDMOND (Radsoft) — Microsoft brought out a new security policy for their online stores that threatens developers with exclusion if security bugs aren't fixed within 'a maximum of 180 days'.

Talk about the pot calling the kettle black.

Reading Between the Lines

The policy change seems great on a superficial level. The official statement is filled with self-praise.

The new policy is part of a Microsoft effort to help ensure that customers can have confidence in the security of the software that is available in our online stores. This confidence includes trusting that developers will respond appropriately when a security vulnerability is discovered. Microsoft has a long history of working with third-party developers and researchers to resolve security vulnerabilities.

Now let's show people how well we work, what we look like in action.

When Microsoft researchers find vulnerabilities in apps, we work directly with app developers through the Microsoft Vulnerability Research program. So far, we have had excellent cooperation from developers in fixing vulnerabilities in their programs.

The first innocent question is why the former IT giant should waste so many resources fixing bugs in third-party software.

Now to the terms of the new security policy.

Under the policy, developers will have a maximum of 180 days to submit an updated app for security vulnerabilities that are not under active attack and are rated Critical or Important according to the Microsoft Security Response Center rating system.

And that's where the alarm bells should start ringing. What's 'Critical'? What's 'Important'? (Aren't all bug fixes important?)

The definitions of the Severity ratings are:

Critical: A vulnerability whose exploitation could allow code execution without user interaction. These scenarios include self-propagating malware (eg network worms) or unavoidable common use scenarios where code execution occurs without warnings or prompts. This could mean browsing to a web page or opening email.

Important: A vulnerability whose exploitation could result in compromise of the confidentiality, integrity, or availability of user data, or of the integrity or availability of processing resources. These scenarios include common use scenarios where client is compromised with warnings or prompts regardless of the prompt's provenance, quality, or usability. Sequences of user actions that do not generate prompts or warnings are also covered.

There are so many sirens screaming it's not funny. What Microsoft are talking about here are 'bugs' in third party userland code that can make a system vulnerable to an attack by merely visiting a web page or opening (or previewing) an email message.

The difference between 'Critical' and 'Important' is not relevant here. To an ordinary user, this all seems pretty good and even admirable. Microsoft are clamping down, they want to improve security for all their devoted fans, this can only be good, right?

But to anyone with an ordinary understanding of how operating systems work, this is another slap in the face for Microsoft users.

How Operating Systems Work

Windows is an operating system. It's what makes the computer operate. The total picture: a computer contains (hardware) devices - your screen, your keyboard, your pointing device, your ports, and so forth. These are all handled by code known as device drivers. This is very low-level code.

Then you need code to interact with those drivers, because ordinary code (defined later) can't access them. This new chunk of code is known as the kernel. The kernel does a lot more too - such as coordinating all your running programs. And it of course takes commands from your applications (such as 'open file', 'save file') and passes them on to the drivers.

Your applications - the code that Microsoft's new security policy targets - can't touch the drivers or low-level code.
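The layering just described can be sketched as a toy model. This is not Windows code - all names here are illustrative - but it shows the idea: applications talk to the kernel through a narrow interface, and only the kernel talks to the drivers.

```python
# Toy model of the layering described above: applications talk to the
# kernel, and only the kernel talks to the device drivers. All names
# are illustrative - a sketch, not real operating system code.

class DiskDriver:
    """Low-level code: directly 'touches' the (simulated) hardware."""
    def __init__(self):
        self.blocks = {}

    def write_block(self, path, data):
        self.blocks[path] = data

    def read_block(self, path):
        return self.blocks[path]

class Kernel:
    """Mediates between applications and drivers."""
    def __init__(self):
        self._driver = DiskDriver()   # private: applications never see this

    # The only interface applications get - analogous to system calls.
    def save_file(self, path, data):
        self._driver.write_block(path, data)

    def open_file(self, path):
        return self._driver.read_block(path)

class Application:
    """Userland code: can only issue kernel commands like 'open file'."""
    def __init__(self, kernel):
        self.kernel = kernel

    def run(self):
        self.kernel.save_file("notes.txt", "hello")
        return self.kernel.open_file("notes.txt")

app = Application(Kernel())
print(app.run())  # the application never touched DiskDriver directly
```

The point of the design: the application holds no reference to the driver at all, so there is no path for userland code to reach the hardware except through the kernel's interface.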

Something's Wrong

It's clear to any systems engineer that something must be wrong with the system if you the user can - in any way at all - issue commands at your user level, through an application, that compromise the operating system as a whole.

Systems are built on levels of privilege. You the ordinary user cannot directly access the more powerful code. You the user - using a 'safe' operating system - shouldn't be able under any circumstances to compromise your system.
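Those levels of privilege can be sketched the same way. Again a toy model with illustrative names, not real code from any operating system: the kernel checks the caller's privilege on every request, so a user-level caller simply cannot reach the operations that would compromise the system.

```python
# Toy model of privilege levels: the kernel checks the caller's
# privilege before carrying out a request. Illustrative names only -
# a sketch of the principle, not any real system's implementation.

USER, SUPERVISOR = 0, 1

class Kernel:
    def __init__(self):
        self.system_config = {"secure": True}

    def syscall(self, caller_level, op):
        # An ordinary user request: always allowed.
        if op == "read_own_files":
            return "ok"
        # A privileged request: refused unless the caller runs at
        # supervisor level. A sound security model enforces this
        # check on every entry into the kernel.
        if op == "patch_system_config":
            if caller_level != SUPERVISOR:
                raise PermissionError("user level cannot compromise the system")
            self.system_config["secure"] = False
            return "patched"

kernel = Kernel()
print(kernel.syscall(USER, "read_own_files"))      # allowed
try:
    kernel.syscall(USER, "patch_system_config")    # denied
except PermissionError as err:
    print(err)
```

In a system built this way, a bug in an application stays a bug in an application: whatever the app does on the user's behalf, the privilege check stands between it and the system itself.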

The thought that a 'safe' operating system should be vulnerable to simply visiting websites or viewing email is preposterous.

But that's the way Microsoft systems work. They work that way because they're not built right.

The reason Microsoft work so closely with third-party developers to fix bugs is that those bugs make Microsoft look bad.

Windows can't be fixed. It has no security model. It's a mishmash, a patchwork of hysterical ad hoc coding. Anyone familiar with the architecture of Windows and the history of Microsoft knows this.

You the user deserve better. You the user should demand better.

See Also
Microsoft: New Security Policy for Store Apps

Copyright © Radsoft. All rights reserved.