Discussion:
thoughts on kernel security issues
Chris Wright
2005-01-12 17:48:07 UTC
This same discussion is taking place in a few forums. Are you opposed to
creating a security contact point for the kernel for people to contact
with potential security issues? This is standard operating procedure
for many projects and complies with RFPolicy.

http://www.wiretrip.net/rfp/policy.html

Right now most things come in via 1) lkml, 2) maintainers, 3) vendor-sec.
It would be nice to have a more centralized place for all of this
information to help track it, make sure things don't fall through
the cracks, and ensure timely fixes and disclosure.

In addition, I think it's worth considering keeping the current stable
kernel version moving forward (point releases a la 2.6.x.y) for critical
(mostly security) bugs. If nothing else, I can provide a subset of -ac
patches that are only that.

I volunteer to help with _all_ of the above. It's what I'm here for.
Use me, abuse me ;-)

thanks,
-chris

===== MAINTAINERS 1.269 vs edited =====
--- 1.269/MAINTAINERS 2005-01-10 17:29:35 -08:00
+++ edited/MAINTAINERS 2005-01-11 13:29:23 -08:00
@@ -1959,6 +1959,11 @@ M: ***@weinigel.se
W: http://www.weinigel.se
S: Supported

+SECURITY CONTACT
+P: Security Officers
+M: kernel-security@{vger.kernel.org, osdl.org, wherever}
+S: Supported
+
SELINUX SECURITY MODULE
P: Stephen Smalley
M: ***@epoch.ncsc.mil
===== REPORTING-BUGS 1.2 vs edited =====
--- 1.2/REPORTING-BUGS 2002-02-04 23:39:13 -08:00
+++ edited/REPORTING-BUGS 2005-01-10 15:35:10 -08:00
@@ -16,6 +16,9 @@ code relevant to what you were doing. If
describe how to recreate it. That is worth even more than the oops itself.
The list of maintainers is in the MAINTAINERS file in this directory.

+ If it is a security bug, please copy the Security Contact listed
+in the MAINTAINERS file. They can help coordinate bugfix and disclosure.
+
If you are totally stumped as to whom to send the report, send it to
linux-***@vger.kernel.org. (For more information on the linux-kernel
mailing list see http://www.tux.org/lkml/).
Linus Torvalds
2005-01-12 18:05:34 UTC
Post by Chris Wright
This same discussion is taking place in a few forums. Are you opposed to
creating a security contact point for the kernel for people to contact
with potential security issues? This is standard operating procedure
for many projects and complies with RFPolicy.
I wouldn't mind, and it sounds like a good thing to have. The _only_
requirement that I have is that there be no stupid embargo on the list.
Any list with a time limit (vendor-sec) I will not have anything to do
with.

If that means that you can get only the list by invitation-only, that's
fine.

Linus
Chris Wright
2005-01-12 18:44:07 UTC
Post by Linus Torvalds
Post by Chris Wright
This same discussion is taking place in a few forums. Are you opposed to
creating a security contact point for the kernel for people to contact
with potential security issues? This is standard operating procedure
for many projects and complies with RFPolicy.
I wouldn't mind, and it sounds like a good thing to have. The _only_
requirement that I have is that there be no stupid embargo on the list.
Any list with a time limit (vendor-sec) I will not have anything to do
with.
Right, I know you don't like the embargo stuff.
Post by Linus Torvalds
If that means that you can get only the list by invitation-only, that's
fine.
Opinions on where to set it up? vger, osdl, ...?

thanks,
-chris
--
Linux Security Modules http://lsm.immunix.org http://lsm.bkbits.net
Linus Torvalds
2005-01-12 18:57:25 UTC
Post by Chris Wright
Right, I know you don't like the embargo stuff.
I'd be very happy with a "private" list in the sense that people wouldn't
feel pressured to fix it that day, and I think it makes sense to have some
policy where we don't necessarily make them public immediately in order to
give people the time to discuss them.

But it should be very clear that no entity (neither the reporter nor any
particular vendor/developer) can require silence, or ask for anything more
than "let's find the right solution". A purely _technical_ delay, in other
words, with no politics or other issues involved.

Otherwise it just becomes politics: you end up having security firms that
want a certain date because they want a PR blitz, and you end up having
vendors who want a certain date because they have release issues.

Does that mean that vendor-sec would end up being used for some things,
where people _want_ the politics and jockeying for position? Probably.
But having a purely technical alternative would be wonderful.
Post by Chris Wright
Post by Linus Torvalds
If that means that you can get only the list by invitation-only, that's
fine.
Opinions on where to set it up? vger, osdl, ...?
I don't personally think it matters. Especially if we make it very clear
that it's purely technical, and no vendor politics can enter into it.
Whatever ends up being easiest.

Linus
Chris Wright
2005-01-12 19:21:37 UTC
Post by Linus Torvalds
Post by Chris Wright
Right, I know you don't like the embargo stuff.
I'd be very happy with a "private" list in the sense that people wouldn't
feel pressured to fix it that day, and I think it makes sense to have some
policy where we don't necessarily make them public immediately in order to
give people the time to discuss them.
That's what I figured you meant.
Post by Linus Torvalds
But it should be very clear that no entity (neither the reporter nor any
particular vendor/developer) can require silence, or ask for anything more
than "let's find the right solution". A purely _technical_ delay, in other
words, with no politics or other issues involved.
Agreed.
Post by Linus Torvalds
Otherwise it just becomes politics: you end up having security firms that
want a certain date because they want a PR blitz, and you end up having
vendors who want a certain date because they have release issues.
There is value in coordinating with vendors, namely to keep them from
being caught with pants down. But vendor-sec already does this part
well enough.
Post by Linus Torvalds
Does that mean that vendor-sec would end up being used for some things,
where people _want_ the politics and jockeying for position? Probably.
But having a purely technical alternative would be wonderful.
Post by Chris Wright
Post by Linus Torvalds
If that means that you can get only the list by invitation-only, that's
fine.
Opinions on where to set it up? vger, osdl, ...?
I don't personally think it matters. Especially if we make it very clear
that it's purely technical, and no vendor politics can enter into it.
Whatever ends up being easiest.
Well, easiest for me is here ;-)

thanks,
-chris
--
Linux Security Modules http://lsm.immunix.org http://lsm.bkbits.net
Jesper Juhl
2005-01-12 20:59:53 UTC
Post by Linus Torvalds
Post by Chris Wright
Right, I know you don't like the embargo stuff.
I'd be very happy with a "private" list in the sense that people wouldn't
feel pressured to fix it that day, and I think it makes sense to have some
policy where we don't necessarily make them public immediately in order to
give people the time to discuss them.
But it should be very clear that no entity (neither the reporter nor any
particular vendor/developer) can require silence, or ask for anything more
than "let's find the right solution". A purely _technical_ delay, in other
words, with no politics or other issues involved.
Being firmly in the full disclosure camp, I hope you intend to stick to
that "no entity (neither the reporter nor any particular vendor/developer)
can require silence" bit. If you do, and if embargoes are kept to a short
number of days, then I think such a list would probably be a good idea. It
would be a good compromise between full disclosure from day one and things
being kept secret and out of view for months.


Just my 0.02euro.
--
Jesper Juhl
Greg KH
2005-01-12 21:27:16 UTC
Post by Linus Torvalds
Post by Chris Wright
Opinions on where to set it up? vger, osdl, ...?
I don't personally think it matters. Especially if we make it very clear
that it's purely technical, and no vendor politics can enter into it.
I think vger fits that bill, if for no other reason than to keep the
"osdl is taking over the kernel" rumors at bay :)

That is, if the vger postmasters agree.

thanks,

greg k-h
Greg KH
2005-01-12 18:51:33 UTC
Post by Linus Torvalds
Post by Chris Wright
This same discussion is taking place in a few forums. Are you opposed to
creating a security contact point for the kernel for people to contact
with potential security issues? This is standard operating procedure
for many projects and complies with RFPolicy.
I wouldn't mind, and it sounds like a good thing to have. The _only_
requirement that I have is that there be no stupid embargo on the list.
Any list with a time limit (vendor-sec) I will not have anything to do
with.
If that means that you can get only the list by invitation-only, that's
fine.
So you would be for a closed list, but there would be no incentive at
all for anyone on the list to keep the contents of what was posted to
the list closed at any time? That goes against the above stated goal of
complying with RFPolicy.

I understand your dislike of having to wait once you know of a security
issue before making the fix public, but how should distros coordinate
fixes in any other way?

thanks,

greg k-h
Linus Torvalds
2005-01-12 19:01:42 UTC
Post by Greg KH
So you would be for a closed list, but there would be no incentive at
all for anyone on the list to keep the contents of what was posted to
the list closed at any time? That goes against the above stated goal of
complying with RFPolicy.
There's already vendor-sec. I assume they follow RFPolicy already. If it's
just another vendor-sec, why would you put up a new list for it?

In other words, if you allow embargoes and vendor politics, what would the
new list buy that isn't already in vendor-sec.

When I saw how vendor-sec worked, I decided I will never be on an embargo
list. Ever. That's not to say that such a list can't work - I just
personally refuse to have anything to do with one. Whether that matters or
not is obviously an open question.

Linus
Greg KH
2005-01-12 19:18:14 UTC
Post by Linus Torvalds
Post by Greg KH
So you would be for a closed list, but there would be no incentive at
all for anyone on the list to keep the contents of what was posted to
the list closed at any time? That goes against the above stated goal of
complying with RFPolicy.
There's already vendor-sec. I assume they follow RFPolicy already. If it's
just another vendor-sec, why would you put up a new list for it?
I think the issue is that there is no main "security" contact for the
kernel. If we want to make vendor-sec that contact, fine, but we better
warn the vendor-sec people :)
Post by Linus Torvalds
In other words, if you allow embargoes and vendor politics, what would the
new list buy that isn't already in vendor-sec.
vendor-sec handles a lot of other stuff that is not kernel related
(every package that is in a distro.) This would only be for the kernel.

thanks,

greg k-h
Chris Wright
2005-01-12 19:38:20 UTC
Post by Greg KH
Post by Linus Torvalds
Post by Greg KH
So you would be for a closed list, but there would be no incentive at
all for anyone on the list to keep the contents of what was posted to
the list closed at any time? That goes against the above stated goal of
complying with RFPolicy.
There's already vendor-sec. I assume they follow RFPolicy already. If it's
just another vendor-sec, why would you put up a new list for it?
I think the issue is that there is no main "security" contact for the
kernel. If we want to make vendor-sec that contact, fine, but we better
warn the vendor-sec people :)
Yes. And I think we should have our own contact.
Post by Greg KH
Post by Linus Torvalds
In other words, if you allow embargoes and vendor politics, what would the
new list buy that isn't already in vendor-sec.
vendor-sec handles a lot of other stuff that is not kernel related
(every package that is in a distro.) This would only be for the kernel.
Yes, and IMO, it could inform vendor-sec.

thanks,
-chris
--
Linux Security Modules http://lsm.immunix.org http://lsm.bkbits.net
Florian Weimer
2005-01-12 19:41:23 UTC
Post by Greg KH
Post by Linus Torvalds
In other words, if you allow embargoes and vendor politics, what would the
new list buy that isn't already in vendor-sec.
vendor-sec handles a lot of other stuff that is not kernel related
(every package that is in a distro.) This would only be for the kernel.
I don't know that much about vendor-sec, but wouldn't the kernel list
contain roughly the same set of people? vendor-sec also has people
from the *BSDs, I believe, but they should probably be notified of Linux
issues as well (often, similar mistakes are made in different
implementations).

If the readership is the same, it doesn't make sense to run two lists,
especially because it's not a normal list and you have to be able
to deal with the vetting.

I agree that embargoed lists are nasty, but sometimes, you have to
make personal sacrifices to further the cause. 8-(
Chris Wright
2005-01-12 23:10:38 UTC
Post by Florian Weimer
Post by Greg KH
Post by Linus Torvalds
In other words, if you allow embargoes and vendor politics, what would the
new list buy that isn't already in vendor-sec.
vendor-sec handles a lot of other stuff that is not kernel related
(every package that is in a distro.) This would only be for the kernel.
I don't know that much about vendor-sec, but wouldn't the kernel list
contain roughly the same set of people?
No.
Post by Florian Weimer
vendor-sec also has people
from the *BSDs, I believe, but they should probably be notified of Linux
issues as well (often, similar mistakes are made in different
implementations).
Take a look at <http://www.freebsd.org/security/index.html>. Pretty
good description. It's normal for projects to have their own security
contact to handle security issues. Once it's vetted, understood,
etc., it's normal to give vendors some heads-up.
Post by Florian Weimer
If the readership is the same, it doesn't make sense to run two lists,
especially because it's not a normal list and you have to be able
to deal with the vetting.
It's not the same readership.

thanks,
-chris
--
Linux Security Modules http://lsm.immunix.org http://lsm.bkbits.net
Marcelo Tosatti
2005-01-12 16:12:27 UTC
Post by Linus Torvalds
Post by Greg KH
So you would be for a closed list, but there would be no incentive at
all for anyone on the list to keep the contents of what was posted to
the list closed at any time? That goes against the above stated goal of
complying with RFPolicy.
There's already vendor-sec. I assume they follow RFPolicy already. If it's
just another vendor-sec, why would you put up a new list for it?
In other words, if you allow embargoes and vendor politics, what would the
new list buy that isn't already in vendor-sec.
When I saw how vendor-sec worked, I decided I will never be on an embargo
list. Ever. That's not to say that such a list can't work - I just
personally refuse to have anything to do with one. Whether that matters or
not is obviously an open question.
Of course it matters, Linus - vendors need time to prepare their updates. You
can't ignore that, and you can't "have nothing to do with it".

You seem to dislike the way embargoes have been done on vendor-sec; fine. They can
be done in a different way, but you have to understand that you and Andrew
need to follow and agree with the embargo.

How do you feel about having short fixed-time embargoes (let's say, 3 or 4 days)?

The only reason for this is to have "time for the vendors to catch up", which
can be defined by the kernel security officer. Nothing more - no vendor politics
involved.

It is a simple matter of synchronization.
Linus Torvalds
2005-01-12 20:00:52 UTC
Post by Marcelo Tosatti
How do you feel about having short fixed-time embargoes (let's say, 3 or 4 days)?
Please realize that I don't have any problem with a short-term embargo per
se; what I have problems with is the _politics_ that it causes. For
example, I do _not_ want this to become a

"vendor-sec got the information five weeks ago, and decided to embargo
until day X, and then because they knew of the 4-day policy of the
kernel security list, they released it to the kernel security list on
day X-4"

See? That is playing politics with a security list. That's the part I
don't want to have anything to do with. If somebody did that to me, I'd
feel pissed off like hell, and I'd say "screw them".

But in the absence of politics, I'd _happily_ have a self-imposed embargo
that is limited to some reasonable timeframe (and "reasonable" is
definitely counted in days, not weeks. And absolutely _not_ in months,
like apparently sometimes happens on vendor-sec).

So if the embargo time starts ticking from _first_ report, I'd personally
be perfectly happy with a policy of, say "5 working days" (aka one week),
or until it was made public somewhere else.

IOW, if it was released on vendor-sec first, vendor-sec could _not_ then
try to time the technical list (unless they do so in a very timely manner
indeed).

I'm not saying that we'd _have_ to go public after five days. I'm saying
that after that, there would be nothing holding it back (but maybe the
technical discussion on how to _fix_ it is still on-going, and that might
make people just not announce it until they're ready).

Linus
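The self-imposed embargo Linus sketches has three moving parts: the clock starts at the *first* report, it runs for a fixed number of working days, and it collapses as soon as the issue is public anywhere else. A minimal sketch of that rule; the 5-working-day figure is his example, and the function name and dates are illustrative assumptions, not a ratified policy:

```python
from datetime import date, timedelta

def embargo_deadline(first_report, business_days=5, public_elsewhere=None):
    """Latest date a fix may be held back under the policy sketched
    above: a fixed number of working days from the first report, or
    sooner if the issue already became public somewhere else."""
    d = first_report
    remaining = business_days
    while remaining > 0:
        d += timedelta(days=1)
        if d.weekday() < 5:  # Monday..Friday count as working days
            remaining -= 1
    if public_elsewhere is not None and public_elsewhere < d:
        return public_elsewhere  # already public: nothing left to hold back
    return d

# Reported on a Monday: five working days later is the following Monday.
print(embargo_deadline(date(2005, 1, 10)))  # 2005-01-17
```

Note the deadline is when nothing *holds the fix back* any longer, not a mandatory publication date; the fix may still land later if the technical discussion is ongoing.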
Linus Torvalds
2005-01-12 20:28:14 UTC
Post by Linus Torvalds
So if the embargo time starts ticking from _first_ report, I'd personally
be perfectly happy with a policy of, say "5 working days" (aka one week),
or until it was made public somewhere else.
Btw, the only thing I care about is the embargo on the _fix_.

If a bug reporter is a security house, and wants to put a longer embargo
on announcing the bug itself, or on some other aspect of the issue (ie
known exploits etc), and wants to make sure that they get the credit and
they get to be the first ones to announce the problem, that's fine by me.

The only thing I really care about is that we can serve the people who
depend on us by giving them source code that is as bug-free and secure as
we can make it. If that means that we should make the changelogs be a bit
less verbose because we don't want to steal the thunder from the people
who found the problem, that's fine.

One of the problems with the embargo thing has been exactly the fact that
people couldn't even find bugs (or just uglinesses) in the fixes, because
they were kept under wraps until the "proper date".

Linus
Marcelo Tosatti
2005-01-12 18:03:40 UTC
Post by Linus Torvalds
Post by Linus Torvalds
So if the embargo time starts ticking from _first_ report, I'd personally
be perfectly happy with a policy of, say "5 working days" (aka one week),
or until it was made public somewhere else.
Btw, the only thing I care about is the embargo on the _fix_.
If a bug reporter is a security house, and wants to put a longer embargo
on announcing the bug itself, or on some other aspect of the issue (ie
known exploits etc), and wants to make sure that they get the credit and
they get to be the first ones to announce the problem, that's fine by me.
The only thing I really care about is that we can serve the people who
depend on us by giving them source code that is as bug-free and secure as
we can make it. If that means that we should make the changelogs be a bit
less verbose because we don't want to steal the thunder from the people
who found the problem, that's fine.
I'm not a big fan of hiding security fixes - having a defined and clear
list of security issues is important. Moreover, the code itself is verbose
enough for some people.

If you release the code earlier than the embargo date, even with "non-verbose
changelogs", in order to best serve the people who depend on us by giving them
source code that is as bug-free and secure as possible, you make the issue
public anyway.

IMO the best thing is to be very verbose about security problems - giving
credit to the people who deserve it accordingly (not stealing the thunder
from the discoverers, but rather making it more visible in the changelog who
they are).

The KSO (Kernel Security Officer, the new buzzword on the block) has to
control the embargo date and be strict about it.
Post by Linus Torvalds
One of the problems with the embargo thing has been exactly the fact that
people couldn't even find bugs (or just uglinesses) in the fixes, because
they were kept under wraps until the "proper date".
Exactly, and keeping things under wraps means an "obscure, unclear list of
security issues". We want it the other way around.
Christian
2005-01-13 03:18:32 UTC
Post by Linus Torvalds
we can make it. If that means that we should make the changelogs be a bit
less verbose because we don't want to steal the thunder from the people
who found the problem, that's fine.
what the....no!!

Changelogs have to be verbose. I'm still often missing hints in the
current changelogs noting that patch_a and update_b got in because
of a security issue. Some boxes need only be updated for the sake of
security, so one would be happy just watching for <security patch> lines
in the kernel changelogs. Giving credit to the people who found the
problem is still possible by mentioning the (source of the) original
advisory.

Christian.
--
BOFH excuse #30:

positron router malfunction
Chris Wright
2005-01-12 20:27:11 UTC
Post by Linus Torvalds
Post by Marcelo Tosatti
How do you feel about having short fixed-time embargoes (let's say, 3 or 4 days)?
Please realize that I don't have any problem with a short-term embargo per
se; what I have problems with is the _politics_ that it causes. For
example, I do _not_ want this to become a
"vendor-sec got the information five weeks ago, and decided to embargo
until day X, and then because they knew of the 4-day policy of the
kernel security list, they released it to the kernel security list on
day X-4"
I agree, and in most of these cases long delays are due to things
falling through the cracks or not getting adequate cycles. Not so much
politics.
Post by Linus Torvalds
See? That is playing politics with a security list. That's the part I
don't want to have anything to do with. If somebody did that to me, I'd
feel pissed off like hell, and I'd say "screw them".
But in the absence of politics, I'd _happily_ have a self-imposed embargo
that is limited to some reasonable timeframe (and "reasonable" is
definitely counted in days, not weeks. And absolutely _not_ in months,
like apparently sometimes happens on vendor-sec).
So if the embargo time starts ticking from _first_ report, I'd personally
be perfectly happy with a policy of, say "5 working days" (aka one week),
or until it was made public somewhere else.
That's more or less my take. Timely response to reporter, timely
debugging/bug fixing and timely disclosure.
Post by Linus Torvalds
IOW, if it was released on vendor-sec first, vendor-sec could _not_ then
try to time the technical list (unless they do so in a very timely manner
indeed).
What about the reverse, informing vendors? This is typical: a project
security contact gets the report, figures out the bug, and works with
vendor-sec on a release date. In my experience, the long cycles rarely
come from that final negotiation. It's usually not much of a negotiation,
rather a "heads-up", "thanks".

The two goals, 1) timely response, fix, and disclosure, and 2) not leaving
vendors with their pants down, don't have to be mutually exclusive.

thanks,
-chris
--
Linux Security Modules http://lsm.immunix.org http://lsm.bkbits.net
Greg KH
2005-01-12 20:57:17 UTC
Post by Chris Wright
Post by Linus Torvalds
But in the absence of politics, I'd _happily_ have a self-imposed embargo
that is limited to some reasonable timeframe (and "reasonable" is
definitely counted in days, not weeks. And absolutely _not_ in months,
like apparently sometimes happens on vendor-sec).
So if the embargo time starts ticking from _first_ report, I'd personally
be perfectly happy with a policy of, say "5 working days" (aka one week),
or until it was made public somewhere else.
That's more or less my take. Timely response to reporter, timely
debugging/bug fixing and timely disclosure.
That sounds sane to me too.
Post by Chris Wright
Post by Linus Torvalds
IOW, if it was released on vendor-sec first, vendor-sec could _not_ then
try to time the technical list (unless they do so in a very timely manner
indeed).
What about the reverse, informing vendors? This is typical: a project
security contact gets the report, figures out the bug, and works with
vendor-sec on a release date. In my experience, the long cycles rarely
come from that final negotiation. It's usually not much of a negotiation,
rather a "heads-up", "thanks".
Vendors should also cc: the kernel-security list/contact at the same
time they would normally contact vendor-sec. I don't see a problem with
that happening, and it would save the people on vendor-sec from having
to wade through a lot of Linux-kernel-specific stuff at times.
Post by Chris Wright
The two goals, 1) timely response, fix, and disclosure, and 2) not leaving
vendors with their pants down, don't have to be mutually exclusive.
I agree, having pants down when you don't want them to be isn't a good
thing :)

thanks,

greg k-h
Alan Cox
2005-01-13 15:36:24 UTC
Post by Greg KH
Vendors should also cc: the kernel-security list/contact at the same
time they would normally contact vendor-sec. I don't see a problem with
that happening, and it would save the people on vendor-sec from having
to wade through a lot of Linux-kernel-specific stuff at times.
vendor-sec has no control over dates or over who else gets to know. We can
ask people to also notify others, and we can suggest dates to people, but
that is all. So if you think 7 days is sensible, then when reporting a hole,
specify that you will be making it public in 7 days.

If vendor-sec ignores a request, for example that the bug not go
public until date X, then we just don't get told in future, and we get
more 0-day crap.
Andrea Arcangeli
2005-01-12 21:20:31 UTC
Post by Chris Wright
The two goals, 1) timely response, fix, and disclosure, and 2) not leaving
vendors with their pants down, don't have to be mutually exclusive.
All vendors are normally ready way before the end of the embargo.
I would suggest that the slowest of all vendors enforce the date (i.e.
all vendors propose a date, and the longest one will be chosen like a
reverse auction, the worst offer wins), with a maximum delay of 1 month
(or whatever else). To guarantee everyone will go as fast as possible,
the date proposed by every different vendor can be published in the
final report. Just keep in mind that the more archs involved, the
more kernels have to be built and the slower a vendor will be. So a
difference of a few days just to build and test everything is very
reasonable and not significant, but this will avoid differences of
>1 week, and it'll avoid the unnecessary delays when everybody is ready to
publish but nobody can (which personally is the only thing that would
annoy me if I were a customer). This will also raise the attention and
increase the stress to get things done ASAP, since there'll be a
reward. Nothing gets done if there's no reward.
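The reverse auction described above is essentially "take the worst (latest) vendor proposal, capped at a maximum delay". A minimal sketch, assuming hypothetical vendor names and dates; the 30-day cap stands in for the "maximum delay of 1 month" example:

```python
from datetime import date, timedelta

def embargo_date(proposals, reported, max_delay_days=30):
    """Reverse auction: the latest (worst) proposed release date wins,
    capped at a maximum delay from the report date. Publishing all
    proposals afterwards is what keeps slow vendors honest."""
    worst = max(proposals.values())
    cap = reported + timedelta(days=max_delay_days)
    return min(worst, cap)

# Hypothetical proposals: each vendor names the date it can hit.
proposals = {
    "vendor-a": date(2005, 2, 1),
    "vendor-b": date(2005, 2, 7),   # slowest offer: more archs to build
    "vendor-c": date(2005, 1, 28),
}

chosen = embargo_date(proposals, reported=date(2005, 1, 12))
print(chosen)  # vendor-b's date wins: 2005-02-07
```

If the worst offer exceeds the cap, the cap applies instead, so no single vendor can stall disclosure indefinitely.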
Dave Jones
2005-01-12 20:53:50 UTC
Post by Linus Torvalds
Post by Marcelo Tosatti
How do you feel about having short fixed-time embargoes (let's say, 3 or 4 days)?
Please realize that I don't have any problem with a short-term embargo per
se; what I have problems with is the _politics_ that it causes. For
example, I do _not_ want this to become a
"vendor-sec got the information five weeks ago, and decided to embargo
until day X, and then because they knew of the 4-day policy of the
kernel security list, they released it to the kernel security list on
day X-4"
See? That is playing politics with a security list. That's the part I
don't want to have anything to do with. If somebody did that to me, I'd
feel pissed off like hell, and I'd say "screw them".
Who would be on the kernel security list if it's to be invite-only?
Is this just going to be a handful of folks, or do you foresee it
being the same kernel folks that are currently on vendor-sec?

My first thought was 'Chris will forward the output of ***@kernel.org
to vendor-sec, and we'll get a chance to get updates built'. But you
seem dead-set against any form of delayed disclosure, which has the
effect of catching us all with our pants down when you push out
a new kernel fixing a hole and we don't have updates ready.

At this time, those with bad intent rub their hands with glee,
0wning boxes at will, whilst those of us responsible for vendor
kernels run like headless chickens trying to get updates out,
which can be a pain in the ass if $vendor is supporting some ancient
release which is afflicted by the same bug.

If you turned the current model upside down and vendor-sec learned
about issues from ***@kernel.org a few days before, it'd at
least give us *some* time, as opposed to just springing stuff
on us without warning.

Thoughts?

Dave
Greg KH
2005-01-12 20:59:54 UTC
Post by Dave Jones
If you turned the current model upside down and vendor-sec learned
about issues from ***@kernel.org a few days before, it'd at
least give us *some* time, as opposed to just springing stuff
on us without warning.
I think having security@ notify vendor-sec when it finds a real problem
would be a good idea, as a lot of stuff is just sifting through finding
the root cause and fix. And if security@ still has its "5 day
countdown" type thing, that still gives you (and me) at least a few days
to run around like mad to update things, which is better than nothing :)

thanks,

greg k-h
Linus Torvalds
2005-01-13 02:09:31 UTC
Post by Dave Jones
Who would be on the kernel security list if it's to be invite-only?
Is this just going to be a handful of folks, or do you foresee it
being the same kernel folks that are currently on vendor-sec?
I'd assume that it's as many as possible. The current vendor-sec kernel
people _plus_ anybody else who wants to.
Post by Dave Jones
My first thought was 'Chris will forward the output of ***@kernel.org
to vendor-sec, and we'll get a chance to get updates built'. But you
seem dead-set against any form of delayed disclosure, which has the
effect of catching us all with our pants down when you push out
a new kernel fixing a hole and we don't have updates ready.
Yes, I think delayed disclosure is broken. I think the whole notion of
"vendor update available when disclosure happens" is nothing but vendor
politics, and doesn't help _users_ one whit. The only thing it does is
allow the vendor to point fingers and say "hey, we have an update, now
it's your problem".

In reality, the user usually needs to have the update available _before_
the disclosure anyway. Preferably by _months_, not minutes.

So I think the whole vendor-sec thing is not helping users at all, it's
purely a "vendor embarrassment" thing.
Post by Dave Jones
If you turned the current model upside down and vendor-sec learned
about issues from ***@kernel.org a few days before, it'd at
least give us *some* time, as opposed to just springing stuff
on us without warning.
I think kernel bugs should be fixed as soon as humanly possible, and _any_
delay is basically just about making excuses. And that means that as many
people as possible should know about the problem as early as possible,
because any closed list (or even just anybody sending a message to me
personally) just increases the risk of the thing getting lost and delayed
for the wrong reasons.

So I'd not personally mind some _totally_ open list. No embargo at all, no
limits on who reads it. The more, the merrier. However, I think my
personal preference is pretty extreme in one end, and I also think that
vendor-sec is extreme in the other. So there is probably some middle
ground.

Will it make everybody happy? Hell no. Nothing like that exists. Which is
why I'm willing to live with an embargo as long as I don't feel like I'm
being toyed with.

And hey, vendor-sec works. I feel like vendor-sec just toys with me, which
is why I refuse to have anything to do with it, but it's entirely possible
that the best solution is to just ignore my wishes. That's OK. I'm ok with
it, vendor-sec is ok with it, nobody is _happy_ with it, but it's another
compromise. Agreeing to disagree is fine too, after all.

So it's embarrassing to everybody if the kernel.org kernel has a security
hole for longer than vendor kernels, but at the same time, most _users_
run vendor kernels anyway, so maybe the current setup is the proper one,
and the kernel.org kernel _should_ be the last one to get the fix.
Whatever. I happen to believe in openness, and vendor-sec does not. It's
that simple.

But if we're seriously looking for a middle ground between my "it should
be open" and vendor-sec "it should be totally closed", that's where my
suggestions come in. Whether people _want_ to look for a middle ground is
the thing to ask first..

For example, having an arbitrarily long embargo on actual known exploit
code is fine with me. I don't care. If I have to promise to never ever
disclose an exploit code in order to see it, I'm fine with that - but I
refuse to delay the _fix_ by more than a few days, and even that "few
days" goes out the window if somebody else has knowingly delayed giving
the fix or problem to me in the first place.

This is not just about sw security, btw. I refuse to sign NDA's on hw
errata too. Same deal - it may mean that I get to know about the problem
later, but it also means that I don't have to feel guilty about knowing of
a problem and being unable to fix it. And it means that people can trust
_me_ personally.

Linus
Andrew Morton
2005-01-13 02:28:38 UTC
Permalink
Post by Linus Torvalds
Yes, I think delayed disclosure is broken. I think the whole notion of
"vendor update available when disclosure happens" is nothing but vendor
politics, and doesn't help _users_ one whit.
...
So I think the whole vendor-sec thing is not helping users at all, it's
purely a "vendor embarrassment" thing.
That sounds a bit over-the-top to me, sorry.

AFAIUI, the vendor requirement is that they have time to have an upgraded
kernel package on their servers when the bug becomes public knowledge.

If correct and reasonable, then what is the best way in which we can
support them in this while promptly upgrading the kernel.org kernel?


Also:

I think we need to be more explicit in separating _types_ of security
problems. This recent controversy over the RLIMIT_MEMLOCK DoS is plain
silliness.

Look through the kernel changelogs for the past year - we've fixed a huge
number of "fix oops in foo" and "fix deadlock in bar" and "fix memory leak
in zot". All of these are of exactly the same severity as the rlimit bug,
and nobody cares, nobody is hurt.

The fuss over the rlimit problem occurred simply because some external
organisation chose to make a fuss over it.

IMO, local DoS holes are important mainly because buggy userspace
applications allow remote users to get in and exploit them, and for that
reason we of course need to fix them up. Even though such an attacker
could cripple the machine without exploiting such a hole.

For the above reasons I see no need to delay publication of local DoS holes
at all. The only thing for which we need to provide special processing is
privilege escalation bugs.

Or am I missing something?
Linus Torvalds
2005-01-13 02:51:36 UTC
Permalink
Post by Andrew Morton
That sounds a bit over-the-top to me, sorry.
Maybe a bit pointed, but the question is: would a user perhaps want to
know about a security fix a month earlier (knowing that bad people might
guess at it too), or want the security fix a month later (knowing that the
bad guys may well have known about the problem all the time _anyway_)?

Being public is different from being known about. If vendor-sec knows
about it, I don't find it at all unbelievable that some spam-virus writer
might know about it too.
Post by Andrew Morton
All of these are of exactly the same severity as the rlimit bug,
and nobody cares, nobody is hurt.
The fact is, 99% of the time, nobody really does care.
Post by Andrew Morton
The fuss over the rlimit problem occurred simply because some external
organisation chose to make a fuss over it.
I agree. And if it had been out in the open all the time, the fuss simply
would not have been there.

I'm a big believer in _total_ openness. Accept the fact that bugs will
happen. Be open about them, and fix them as soon as possible. None of this
cloak-and-dagger stuff.

Linus
David Blomberg
2005-01-13 03:05:16 UTC
Permalink
Post by Linus Torvalds
Post by Andrew Morton
That sounds a bit over-the-top to me, sorry.
Maybe a bit pointed, but the question is: would a user perhaps want to
know about a security fix a month earlier (knowing that bad people might
guess at it too), or want the security fix a month later (knowing that the
bad guys may well have known about the problem all the time _anyway_)?
Being public is different from being known about. If vendor-sec knows
about it, I don't find it at all unbelievable that some spam-virus writer
might know about it too.
Post by Andrew Morton
All of these are of exactly the same severity as the rlimit bug,
and nobody cares, nobody is hurt.
The fact is, 99% of the time, nobody really does care.
Post by Andrew Morton
The fuss over the rlimit problem occurred simply because some external
organisation chose to make a fuss over it.
I agree. And if it had been out in the open all the time, the fuss simply
would not have been there.
I'm a big believer in _total_ openness. Accept the fact that bugs will
happen. Be open about them, and fix them as soon as possible. None of this
cloak-and-dagger stuff.
Linus
Devil's-advocate: who is on the vendor-sec list? As someone who has
started developing a roll-your-own Linux distro (as hundreds of others
have as well), who decides who is "approved" to hear about the fixes
beforehand? What makes SuSE and Red Hat more deserving than Bonzai
(user base? in-house developers?). I agree with Linus-san: openness is
best all around; the rest is mostly politics.
--
David Blomberg
***@davelinux.com
AIS, APS, ASE, CCNA, Linux+, LCA, LCP, LPI I, MCP, MCSA, MCSE, RHCE, Server+
Greg KH
2005-01-13 02:56:06 UTC
Permalink
Post by Andrew Morton
IMO, local DoS holes are important mainly because buggy userspace
applications allow remote users to get in and exploit them, and for that
reason we of course need to fix them up. Even though such an attacker
could cripple the machine without exploiting such a hole.
For the above reasons I see no need to delay publication of local DoS holes
at all. The only thing for which we need to provide special processing is
privilege escalation bugs.
Or am I missing something?
So, a "classification" of the severity of the bug would cause different
types of disclosure? That's a good idea in theory, but trying to nail
down specifics for bug classifications tends to be difficult.

Although I think both Red Hat and SuSE have a classification system in
place already that might help out here.

Anyway, if so, I like it. I think that would be a good thing to have,
if for no other reason than that I don't want to see security
announcements for every single driver bug that's patched that had
caused a user-created oops.

thanks,

greg k-h
Chris Wright
2005-01-13 03:01:09 UTC
Permalink
Post by Andrew Morton
AFAIUI, the vendor requirement is that they have time to have an upgraded
kernel package on their servers when the bug becomes public knowledge.
Yup.
Post by Andrew Morton
If correct and reasonable, then what is the best way in which we can
support them in this while promptly upgrading the kernel.org kernel?
Most projects inform vendors with enough heads-up time to let them get
their stuff together and out the door.
Post by Andrew Morton
IMO, local DoS holes are important mainly because buggy userspace
applications allow remote users to get in and exploit them, and for that
reason we of course need to fix them up. Even though such an attacker
could cripple the machine without exploiting such a hole.
For the above reasons I see no need to delay publication of local DoS holes
at all. The only thing for which we need to provide special processing is
privilege escalation bugs.
Or am I missing something?
No, that's pretty similar to CVE allocation. At one time, there was
little effort even put into allocating CVE entries for local DoS holes.
It's not that they aren't important, but less critical than remote DoS
issues, and way less so than anything priv escalation related.

thanks,
-chris
--
Linux Security Modules http://lsm.immunix.org http://lsm.bkbits.net
Dave Jones
2005-01-13 03:35:42 UTC
Permalink
Post by Andrew Morton
IMO, local DoS holes are important mainly because buggy userspace
applications allow remote users to get in and exploit them, and for that
reason we of course need to fix them up. Even though such an attacker
could cripple the machine without exploiting such a hole.
For the above reasons I see no need to delay publication of local DoS holes
at all. The only thing for which we need to provide special processing is
privilege escalation bugs.
Or am I missing something?
The problem is that how much these things affect you depends on who
you are and what you're doing with Linux.

A local DoS doesn't bother me one squat personally, as I'm the only
user of computers I use each day. An admin of a shell server or
the like however would likely see this in a different light.
(though it can be argued a mallet to the kneecaps of the user
responsible is more effective than any software update)

An information leak from kernel space may be equally mundane to some,
though terrifying to some admins. Would you want some process to be
leaking your root password, credit card #, etc. to some other user's process?

Privilege escalation is clearly the number one threat. Whilst some
class a 'remote root hole' as higher risk than a 'local root hole', far
too often we've had instances where execution of shellcode by
overflowing some buffer in $crappyapp has led to a shell,
turning a local root into a remote root.

For us thankfully, exec-shield has trapped quite a few remotely
exploitable holes, preventing the above.

Dave
Andrew Morton
2005-01-13 03:42:39 UTC
Permalink
Post by Dave Jones
Post by Andrew Morton
IMO, local DoS holes are important mainly because buggy userspace
applications allow remote users to get in and exploit them, and for that
reason we of course need to fix them up. Even though such an attacker
could cripple the machine without exploiting such a hole.
For the above reasons I see no need to delay publication of local DoS holes
at all. The only thing for which we need to provide special processing is
privilege escalation bugs.
Or am I missing something?
The problem is that how much these things affect you depends on who
you are and what you're doing with Linux.
A local DoS doesn't bother me one squat personally, as I'm the only
user of computers I use each day. An admin of a shell server or
the like however would likely see this in a different light.
(though it can be argued a mallet to the kneecaps of the user
responsible is more effective than any software update)
yup. But there are so many ways to cripple a Linux box once you have local
access. Another means which happens to be bug-induced doesn't seem
important.
Post by Dave Jones
An information leak from kernel space may be equally mundane to some,
though terrifying to some admins. Would you want some process to be
leaking your root password, credit card #, etc. to some other user's process?
Privilege escalation is clearly the number one threat. Whilst some
class a 'remote root hole' as higher risk than a 'local root hole', far
too often we've had instances where execution of shellcode by
overflowing some buffer in $crappyapp has led to a shell,
turning a local root into a remote root.
I'd place information leaks and privilege escalations into their own class,
way above "yet another local DoS".

A local privilege escalation hole should be viewed as seriously as a remote
privilege escalation hole, given the bugginess of userspace servers, yes?
Chris Wright
2005-01-13 03:54:10 UTC
Permalink
Post by Andrew Morton
yup. But there are so many ways to cripple a Linux box once you have local
access. Another means which happens to be bug-induced doesn't seem
important.
That depends on the environment. If it's already locked down via MAC
and rlimits, etc., and the bug now creates a DoS that wasn't there
before, it may be important. But as a general rule of thumb, local DoS
is much less severe than other bugs, I fully agree.
Post by Andrew Morton
Post by Dave Jones
An information leak from kernel space may be equally mundane to some,
though terrifying to some admins. Would you want some process to be
leaking your root password, credit card #, etc. to some other user's process?
Privilege escalation is clearly the number one threat. Whilst some
class a 'remote root hole' as higher risk than a 'local root hole', far
too often we've had instances where execution of shellcode by
overflowing some buffer in $crappyapp has led to a shell,
turning a local root into a remote root.
I'd place information leaks and privilege escalations into their own class,
way above "yet another local DoS".
Yes, me too.
Post by Andrew Morton
A local privilege escalation hole should be viewed as seriously as a remote
privilege escalation hole, given the bugginess of userspace servers, yes?
Absolutely, yes. Local root hole all too often == remote root hole.

thanks,
-chris
--
Linux Security Modules http://lsm.immunix.org http://lsm.bkbits.net
William Lee Irwin III
2005-01-13 04:49:42 UTC
Permalink
Post by Andrew Morton
Post by Dave Jones
The problem is that how much these things affect you depends on who
you are and what you're doing with Linux.
A local DoS doesn't bother me one squat personally, as I'm the only
user of computers I use each day. An admin of a shell server or
the like however would likely see this in a different light.
(though it can be argued a mallet to the kneecaps of the user
responsible is more effective than any software update)
yup. But there are so many ways to cripple a Linux box once you have local
access. Another means which happens to be bug-induced doesn't seem
important.
This is too broad and sweeping of a statement, and can be used to
excuse almost any bug triggerable only by local execution.

Most of the local DoS's I'm aware of are memory-management-related,
i.e. user-triggerable proliferation of pinned kernel data structures.
Beancounter patches were meant to address at least part of that. Making
the larger kernel data structures whose proliferation users can trigger
pageable would also be a large help.
Post by Andrew Morton
Post by Dave Jones
An information leak from kernel space may be equally mundane to some,
though terrifying to some admins. Would you want some process to be
leaking your root password, credit card #, etc. to some other user's process?
Privilege escalation is clearly the number one threat. Whilst some
class a 'remote root hole' as higher risk than a 'local root hole', far
too often we've had instances where execution of shellcode by
overflowing some buffer in $crappyapp has led to a shell,
turning a local root into a remote root.
I'd place information leaks and privilege escalations into their own class,
way above "yet another local DoS".
A local privilege escalation hole should be viewed as seriously as a remote
privilege escalation hole, given the bugginess of userspace servers, yes?
I agree on the latter count. On the first, I have to dissent with the
assessment of local DoS's as "unimportant".


-- wli
Andrew Morton
2005-01-13 06:54:12 UTC
Permalink
Post by William Lee Irwin III
Most of the local DoS's I'm aware of are memory-management-related,
i.e. user-triggerable proliferation of pinned kernel data structures.
Well. A heck of a lot of the DoS opportunities we've historically seen
involved memory leaks, deadlocks or making the kernel go oops or BUG with
locks held or with kernel memory allocated.
William Lee Irwin III
2005-01-13 07:19:49 UTC
Permalink
Post by Andrew Morton
Post by William Lee Irwin III
Most of the local DoS's I'm aware of are memory-management-related,
i.e. user-triggerable proliferation of pinned kernel data structures.
Well. A heck of a lot of the DoS opportunities we've historically seen
involved memory leaks, deadlocks or making the kernel go oops or BUG with
locks held or with kernel memory allocated.
I'd consider those even more severe.


-- wli
Matt Mackall
2005-01-13 07:25:58 UTC
Permalink
Post by Andrew Morton
Post by William Lee Irwin III
Most of the local DoS's I'm aware of are memory-management-related,
i.e. user-triggerable proliferation of pinned kernel data structures.
Well. A heck of a lot of the DoS opportunities we've historically seen
involved memory leaks, deadlocks or making the kernel go oops or BUG with
locks held or with kernel memory allocated.
I think we can probably exclude root-only local DoS from the full
embargo treatment for starters. The recent /dev/random sysctl one was
in that category.

I can imagine some local DoS bugs that are worth keeping a lid on for
a bit. The classic F00F bug may have been a good example, but a hole in
an arbitrary driver may not.
--
Mathematics is the supreme nostalgia of our time.
Linus Torvalds
2005-01-13 04:48:57 UTC
Permalink
Post by Dave Jones
For us thankfully, exec-shield has trapped quite a few remotely
exploitable holes, preventing the above.
One thing worth considering, but may be a bit _too_ draconian, is a
capability that says "can execute ELF binaries that you can write to".

Without that capability set, you can only execute binaries that you cannot
write to, and that you cannot _get_ write permission to (ie you can't be
the owner of them either - possibly only binaries where the owner is
root).

Sure, that's clearly not viable for a developer or even somebody who
maintains his own machine, but it _is_ probably viable for pretty much any
user that is afraid of compiling stuff him/herself and just gets signed
rpm's that install as root anyway. And it should certainly be viable for
somebody like "nobody" or "ftp" or "apache".

And I suspect there is almost zero overlap between the "developer
workstation" kind of setup (where the above is just not workable) and
"server or end-user desktop" setup where it might work.

A lot of the local root exploits depend on being able to run code that
doesn't come pre-installed on the system. A hole in a user-level server
may get you local shell access, but you generally need another stage to
get elevated privileges and _really_ mess with the machine.

Quite frankly, nobody should ever depend on the kernel having zero holes.
We do our best, but if you want real security, you should have other
shields in place. exec-shield is one. So is using a compiler that puts
guard values on the stack frame (immunix, I think). But so is saying "you
can't just compile or download your own binaries, nyaah, nyaah, nyaah".

As I've already made clear, I don't believe one whit in the "secrecy"
approach to security. I believe that "security through obscurity" can
actually be one valid level of security (after all, in the extreme case,
that's all a password ever really is).

So I believe that in the case of hiding vulnerabilities, any "security
gain" from the obscurity is more than made up for by all the security you
lose though delaying action and not giving people information about the
problem.

I realize people disagree with me, which is also why I don't in any way
take vendor-sec as a personal affront or anything like that: I just think
it's a mistake, and am very happy to be vocal about it, but hey, the
fundamental strength of open source is exactly the fact that people don't
have to agree about everything.

Linus
Barry K. Nathan
2005-01-13 05:51:30 UTC
Permalink
Post by Linus Torvalds
Quite frankly, nobody should ever depend on the kernel having zero holes.
We do our best, but if you want real security, you should have other
shields in place. exec-shield is one. So is using a compiler that puts
That reminds me...

What are the chances of exec-shield making it into mainline anytime
in the near future? It's the *big* feature that has me preferring
Red Hat/Fedora vendor kernels over mainline kernels, even on non-Red
Hat/Fedora distributions. (I know that parts of exec-shield are already in
mainline, but I'm wondering about the parts that haven't been merged yet.)

-Barry K. Nathan <***@pobox.com>
Matt Mackall
2005-01-13 07:28:51 UTC
Permalink
Post by Linus Torvalds
Post by Dave Jones
For us thankfully, exec-shield has trapped quite a few remotely
exploitable holes, preventing the above.
One thing worth considering, but may be a bit _too_ draconian, is a
capability that says "can execute ELF binaries that you can write to".
Without that capability set, you can only execute binaries that you cannot
write to, and that you cannot _get_ write permission to (ie you can't be
the owner of them either - possibly only binaries where the owner is
root).
We can do that now with a combination of read-only and no-exec mounts.
--
Mathematics is the supreme nostalgia of our time.
Willy Tarreau
2005-01-13 07:42:34 UTC
Permalink
Post by Matt Mackall
Post by Linus Torvalds
Post by Dave Jones
For us thankfully, exec-shield has trapped quite a few remotely
exploitable holes, preventing the above.
One thing worth considering, but may be a bit _too_ draconian, is a
capability that says "can execute ELF binaries that you can write to".
Without that capability set, you can only execute binaries that you cannot
write to, and that you cannot _get_ write permission to (ie you can't be
the owner of them either - possibly only binaries where the owner is
root).
We can do that now with a combination of read-only and no-exec mounts.
That's why some hardened distros ship with everything R/O (except /var)
and /var non-exec.

Willy
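The layout described above can be sketched as an /etc/fstab; the device
names and filesystem types here are purely illustrative:

```
# Root and /usr read-only; the writable filesystems carry
# noexec (plus nosuid/nodev, which usually travel with it).
/dev/sda1  /      ext3  ro                      1 1
/dev/sda2  /usr   ext3  ro                      1 2
/dev/sda3  /var   ext3  rw,noexec,nosuid,nodev  1 2
/dev/sda4  /tmp   ext3  rw,noexec,nosuid,nodev  1 2
```

Note that noexec only blocks direct exec(); an interpreter can still read
and run scripts from such a filesystem, so this is containment, not a
guarantee.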
David Lang
2005-01-13 08:02:01 UTC
Permalink
Post by Willy Tarreau
Post by Matt Mackall
Post by Linus Torvalds
Post by Dave Jones
For us thankfully, exec-shield has trapped quite a few remotely
exploitable holes, preventing the above.
One thing worth considering, but may be a bit _too_ draconian, is a
capability that says "can execute ELF binaries that you can write to".
Without that capability set, you can only execute binaries that you cannot
write to, and that you cannot _get_ write permission to (ie you can't be
the owner of them either - possibly only binaries where the owner is
root).
We can do that now with a combination of read-only and no-exec mounts.
That's why some hardened distros ship with everything R/O (except var) and
/var non-exec.
This only works if you have no reason to mix the non-exec and R/O stuff
in the same directory (there is some software that has paths for stuff
hard-coded that will not work without them being together).

Also, it gives you no ability to maintain the protection for normal users
at the same time that an admin updates the system. Linus's proposal would
let you give this cap to the normal users, but still let the admin manage
the box normally.

David Lang
--
There are two ways of constructing a software design. One way is to make it so simple that there are obviously no deficiencies. And the other way is to make it so complicated that there are no obvious deficiencies.
-- C.A.R. Hoare
Willy Tarreau
2005-01-13 10:05:42 UTC
Permalink
Post by David Lang
Post by Willy Tarreau
That's why some hardened distros ship with everything R/O (except /var)
and /var non-exec.
This only works if you have no reason to mix the non-exec and R/O stuff
in the same directory (there is some software that has paths for stuff
hard-coded that will not work without them being together).
Symlinks are the solution to this breakage. And if your software comes
from the DOS world, where temporary files are stored in the same directory
as the binaries (remember SET TEMP=C:\DOS?), then you have no possibility at
all; but the application design by itself should be frightening enough to keep
you away from it.
Post by David Lang
also it gives you no ability to maintain the protection for normal users
at the same time that an admin updates the system. Linus's proposal would
let you five this cap to the normal users, but still let the admin manage
the box normally.
That's perfectly true. What I explained was not meant to be a universal
solution, but an easy step forward.

Willy
Christoph Hellwig
2005-01-13 08:23:20 UTC
Permalink
Post by Linus Torvalds
Without that capability set, you can only execute binaries that you cannot
write to, and that you cannot _get_ write permission to (ie you can't be
the owner of them either - possibly only binaries where the owner is
root).
I think this is called "mount user-writeable filesystems with -noexec" ;-)
Linus Torvalds
2005-01-13 16:38:03 UTC
Permalink
Post by Christoph Hellwig
Post by Linus Torvalds
Without that capability set, you can only execute binaries that you cannot
write to, and that you cannot _get_ write permission to (ie you can't be
the owner of them either - possibly only binaries where the owner is
root).
I think this is called "mount user-writeable filesystems with -noexec" ;-)
You miss the point.

It wouldn't be a global flag. It's a per-process flag. For example, many
people _do_ need to execute binaries in their home directory. I do it all
the time. I know what a compiler is.

Others do not necessarily do that. Sure, you could mount each users home
directory separately with a bind mount, but that's not only inconvenient,
it also misses the point - it's not about _where_ the binary is, it's
about _who_ runs it.

What is the real issue with MS security? Is it that NT is fundamentally a
weak kernel? Hey, maybe. Or maybe not. More likely it's the mindset that
you trust everything, regardless of where it is. Most users are admins,
and you run any code you see (or don't see) by default, whether it's in an
email attachment or whatever.

Containment is what real security is about. Everybody knows bugs happen,
and that people do stupid things. Developers, users, whatever. We all do.

For example, in many environments it could possibly be a good idea to make
even _root_ have the "can run non-root binaries flag" clear by default.
Imagine a system that booted up that way, and used PAM to enable non-root
binaries on a per-user basis (for developers who need it or otherwise
people who are trusted to have their own binaries). Think about what that
means...

Every single daemon in the system would have the flag clear by default.
You take over the web-server, and the most you have to play with are the
binaries that are already installed on the system (and the code you can
inject directly into the web server process from outside - that's likely
to be the _real_ security hazard).

It's just another easy containment. It's not real security in itself, but
_no_ single thing is "real security". You just add containment, to the
point where it gets increasingly difficult to get to some state where you
can do lots of damage (in a perfect world, exponentially more so, but
these containments are seldom independent of each other).

NOTE! I'd personally hate some of the security things. For example, I
think the "randomize code addresses" is absolutely horrible, just because
of the startup overhead it implies (specifically no pre-linking). I also
immensely dislike exec-shield because of the segment games it plays - I
think it makes sense in the short run but not in the long run, so I much
prefer that one as a "vendor feature", not as a "core feature".

So when I talk about security, I have this double-standard where I end up
convinced that many features are things that _I_ should not do, but
others likely should ;)

Linus
Arjan van de Ven
2005-01-13 17:01:02 UTC
Permalink
Post by Linus Torvalds
NOTE! I'd personally hate some of the security things. For example, I
think the "randomize code addresses" is absolutely horrible, just because
of the startup overhead it implies (specifically no pre-linking). I also
immensely dislike exec-shield because of the segment games it plays - I
think it makes sense in the short run but not in the long run, so I much
prefer that one as a "vendor feature", not as a "core feature".
I think you are somewhat misguided on these: the randomisation done in
FC does NOT prevent prelink from working, with the exception of special
PIE binaries. Does this destroy the randomisation? No: prelink *itself*
randomizes the addresses when creating its prelink database (which is
in Fedora once every two weeks, with a daily incremental run in between;
the bi-weekly run is needed anyway to properly deal with new and updated
software, the daily runs are stopgaps only). This makes all *systems*
different, even though runs of the same app on the same machine right
after each other see the same library addresses.
That does not destroy the value of randomisation; it limits it slightly,
since this ONLY matters for libraries, not for the stack or heap and the
other things that get randomized.

As for the segment limits (you call them exec-shield, but exec-shield is
actually a whole bunch of stuff that happens to include segment limits;
a bit like tree and forest ;) yes, they probably should remain a vendor
feature, no argument about that.
Linus Torvalds
2005-01-13 17:19:16 UTC
Permalink
Post by Arjan van de Ven
I think you are somewhat misguided on these: the randomisation done in
FC does NOT prevent prelink from working, with the exception of special
PIE binaries. Does this destroy the randomisation? No: prelink *itself*
randomizes the addresses when creating its prelink database
There was a kernel-based randomization patch floating around at some
point, though. I think it's part of PaX. That's the one I hated.

Although I haven't seen it in a long time, so you may well be right that
that one too is fine.

My point was really more about the generic issue of me being two-faced:
I'll encourage people to do things that I don't actually like myself in
the standard kernel.

I just think that forking at some levels is _good_. I like the fact that
different vendors have different objectives, and that there are things
like Immunix and PaX etc around. Of course, the problem that sometimes
results in is the very fact that because I encourage others to have
special patches, they end up not even trying to feed back _parts_ of them.

In this case I really believe that was the case. There are fixes in PaX
that make sense for the standard kernel. But because not _all_ of PaX
makes sense for the standard kernel, and because I will _not_ take their
patch whole-sale, they apparently believe (incorrectly) that I wouldn't
even take the non-intrusive fixes, and haven't really even tried to feed
them back.

(Yes, Brad Spengler has talked to me about PaX, but never sent me
individual patches, for example. People seem to expect me to take all or
nothing - and there's a _lot_ of pretty extreme people out there that
expect everybody else to be as extreme as they are..)

Linus
John Richard Moser
2005-01-13 18:31:07 UTC
Permalink
Post by Linus Torvalds
Post by Arjan van de Ven
I think you are somewhat misguided on these: the randomisation done in
FC does NOT prevent prelink from working, with the exception of special
PIE binaries. Does this destroy the randomisation? No: prelink *itself*
randomizes the addresses when creating its prelink database
There was a kernel-based randomization patch floating around at some
point, though. I think it's part of PaX. That's the one I hated.
PaX and Exec Shield both have them; personally I believe PaX is a more
mature technology, since it's 1) still actively developed, and 2) been
around since late 2000. The rest of the community disagrees with me of
course, but whatever; let's not get into PMS matches on whose junk is
better than whose.
Post by Linus Torvalds
Although I haven't seen it in a long time, so you may well be right that
that one too is fine.
I'll encourage people to do things that I don't actually like myself in
the standard kernel.
I just think that forking at some levels is _good_. I like the fact that
different vendors have different objectives, and that there are things
like Immunix and PaX etc around.
I use the argument all the time that the 2.6 development model being
used as 'stable' hurts this, and people (not you, Linus) have fed back
to me that "they should submit their patches to mainline then."
Post by Linus Torvalds
Of course, the problem that sometimes
results in is the very fact that because I encourage others to have
special patches, they end up not even trying to feed back _parts_ of them.
In this case I really believe that was the case. There are fixes in PaX
that make sense for the standard kernel.
Yes, there are often fixes that should go into mainline, aside from the
added functionality. I think these should be split out and distributed.
*shrug*
Post by Linus Torvalds
But because not _all_ of PaX
makes sense for the standard kernel,
Personally I believe it does, for social engineering reasons (encourage
software developers to be mindful of the more secure setting). That
being said, every part of PaX is an option, so even if it went mainline,
it'd be disabled where inappropriate anyway.
Post by Linus Torvalds
and because I will _not_ take their
patch whole-sale, they apparently believe (incorrectly) that I wouldn't
even take the non-intrusive fixes, and haven't really even tried to feed
them back.
(Yes, Brad Spengler has talked to me about PaX, but never sent me
individual patches, for example. People seem to expect me to take all or
nothing - and there's a _lot_ of pretty extreme people out there that
expect everybody else to be as extreme as they are..)
Things like PaX actually have to be taken all or nothing for a reason.
This doesn't mean they have to come with all the GrSecurity
enhancements; although those help as well.

PaX supplies two major components: enhanced memory protections,
particularly using the PROT_EXEC marking (hardware or otherwise); and
address space layout randomization.

For now I'll set aside the emulations on x86, but I'll cover that later.

First, let's look at ASLR. ASLR can be defeated if you can inject code
to read (if I understand correctly) %ebp and locate the global offset
table. Thus, on its own, ASLR is pretty much useless.

If we look at executable space protections, the PROT_EXEC restriction
can be undone by returning into mprotect(); since PaX restricts
mprotect(), you have to return into open(), write() and mmap() instead,
but it's the same deal. Either way, the memory space protections can be
defeated by ret2libc, so on their own these are also pretty much useless.

Examining further, you should consider deploying ASLR in conjunction
with proper memory space protections. In this situation, ASLR must be
defeated before the memory protections can be defeated; and the memory
protections must be defeated before you can defeat ASLR. *->ASLR->NX->*
continuous circle.

This makes defeating the ASLR/NX combination a paradox: you can't
defeat both at the same time, and you can't defeat one without first
defeating the other. The only logical possibility is to defeat neither.
(it's actually possible, but only by completely random guessing and one
hell of a stroke of luck)

Going back to the emulation, there's no NX protections without an NX
bit; so for any of this to have any point at all on x86--the most
popular desktop platform ATM--you need to emulate an NX bit.

I can see where you wouldn't want to put in a superpatch like PaX, and
I'm not saying you should jump up right now and go merge it with
mainline; but I feel it's important that you understand that each part
of PaX complements the others to form a network of protections that
reinforce each other. Each piece would fail without the others to
cover its shortcomings; but together they've got everything pretty
well covered.
Post by Linus Torvalds
Linus
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
- --
All content of all messages exchanged herein are left in the
Public Domain, unless otherwise explicitly stated.
Arjan van de Ven
2005-01-13 17:45:37 UTC
Permalink
Post by Linus Torvalds
Post by Arjan van de Ven
I think you are somewhat misguided on these: the randomisation done in
FC does NOT prohibit prelink from working, with the exception of special
PIE binaries. Does this destroy the randomisation? No: prelink *itself*
randomizes the addresses when creating its prelink database
There was a kernel-based randomization patch floating around at some
point, though. I think it's part of PaX. That's the one I hated.
Although I haven't seen it in a long time, so you may well be right that
that one too is fine.
I don't know about the pax one, we were careful with the fc one to not
break prelink for obvious reasons ;)
Post by Linus Torvalds
I just think that forking at some levels is _good_. I like the fact that
different vendors have different objectives, and that there are things
like Immunix and PaX etc around. Of course, the problem that sometimes
results in is the very fact that because I encourage others to have
special patches, they end up not even trying to feed back _parts_ of them.
actually I was hoping to feed some bits of execshield (eg the
randomisation) to you sometime in the next weeks/months, after a
thorough cleaning of the code, and defaulting to off.

The code can be made quite reasonable I suspect if I manage to find a
few hours to clean it up some
(the pre-cleanup patch is at
http://www.kernel.org/pub/linux/kernel/people/arjan/execshield/00-randomize-A0

in case you want to see for yourself)

that patch randomizes:
- the stack (already done via an x86-specific hack in the existing
  kernel; this patch makes it more generic)
- the brk start
- the start of mmap space (but leaves mmaps alone where the app gives a
  hint for the address, like ld.so does for prelinked libs)
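[Editor's note: the randomisation described here is easy to observe. A small sketch (assuming Linux /proc and a kernel with address-space randomisation enabled; this is not part of the exec-shield patch itself) compares the stack mapping of two freshly exec'd processes.]

```python
import subprocess
import sys

# Each exec gets its own layout, so print the stack mapping's address
# range from a fresh interpreter, twice.
snippet = ("print(next(line.split()[0] for line in open('/proc/self/maps')"
           " if '[stack]' in line))")
runs = [subprocess.check_output([sys.executable, "-c", snippet],
                                text=True).strip()
        for _ in range(2)]

# With randomize_va_space enabled the two ranges are expected to
# differ; with it disabled they will match.
print(runs[0])
print(runs[1])
```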
Alan Cox
2005-01-13 16:12:37 UTC
Permalink
Post by Linus Torvalds
It wouldn't be a global flag. It's a per-process flag. For example, many
people _do_ need to execute binaries in their home directory. I do it all
the time. I know what a compiler is.
noexec has never been worth anything because of scripts. Kernel won't
load that binary, I can write a script to do it.
Chris Wright
2005-01-13 17:49:42 UTC
Permalink
Post by Alan Cox
Post by Linus Torvalds
It wouldn't be a global flag. It's a per-process flag. For example, many
people _do_ need to execute binaries in their home directory. I do it all
the time. I know what a compiler is.
noexec has never been worth anything because of scripts. Kernel won't
load that binary, I can write a script to do it.
Scripts can only do what the interpreter does. And it's often a lot harder
to get the interpreter to do certain things. For example, you simply
_cannot_ get any thread race conditions with most scripts out there, nor
can you generally use magic mmap patterns.
I think perl has threads and some type of free-form syscall ability.
Heck, with a legit ELF binary and gdb you can get a long way. But I
agree on two things: 1) it's all about layers, since there is no silver
bullet, and 2) containment goes a long way to mitigate damage.

thanks,
-chris
--
Linux Security Modules http://lsm.immunix.org http://lsm.bkbits.net
Linus Torvalds
2005-01-13 17:33:38 UTC
Permalink
Post by Alan Cox
Post by Linus Torvalds
It wouldn't be a global flag. It's a per-process flag. For example, many
people _do_ need to execute binaries in their home directory. I do it all
the time. I know what a compiler is.
noexec has never been worth anything because of scripts. Kernel won't
load that binary, I can write a script to do it.
Scripts can only do what the interpreter does. And it's often a lot harder
to get the interpreter to do certain things. For example, you simply
_cannot_ get any thread race conditions with most scripts out there, nor
can you generally use magic mmap patterns.

Am I claiming that disallowing self-written ELF binaries gets rid of all
security holes? Obviously not. I'm claiming that there are things that
people can do that make it harder, and that _real_ security is not about
trusting one subsystem, but in making it hard enough in many independent
ways that it's just too effort-intensive to attack.

It's the same thing with passwords. Clearly any password protected system
can be broken into: you just have to guess the password. It then becomes a
matter of how hard it is to "guess" - at some point you say a password is
secure not because it is a password, but because it's too _expensive_ to
guess/break.

So all security issues are about balancing cost vs gain. I'm convinced
that the gain from openness is higher than the cost. Others will disagree.

Linus
John Richard Moser
2005-01-13 18:59:19 UTC
Permalink
[...]
Post by Linus Torvalds
Am I claiming that disallowing self-written ELF binaries gets rid of all
security holes? Obviously not. I'm claiming that there are things that
people can do that make it harder, and that _real_ security is not about
trusting one subsystem, but in making it hard enough in many independent
ways that it's just too effort-intensive to attack.
I think you can make it non-guaranteeable.
Post by Linus Torvalds
It's the same thing with passwords. Clearly any password protected system
can be broken into: you just have to guess the password. It then becomes a
matter of how hard it is to "guess" - at some point you say a password is
secure not because it is a password, but because it's too _expensive_ to
guess/break.
You can't guarantee you can guess a password. You could for example
write a pam module that mandates a 3 second delay on failed
authentication for a user (it does it for the console currently; use 3
separate consoles and you can do the attack 3 times faster). Now you
have to guess the password with one try every 3 seconds.

aA1# 96 possible values per character, 8 characters. 7.2139x10^15
combinations. It takes 686253404.7 years to go through all those at one
every 3 seconds. You've got a good chance at half that.
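[Editor's note: the arithmetic above checks out; a short sketch, using 96 printable characters, 8 positions, one guess every 3 seconds, and 365-day years to match the figure quoted.]

```python
# 96 possible values per character, 8 characters.
combinations = 96 ** 8

# One guess every 3 seconds, converted to 365-day years.
seconds = combinations * 3
years = seconds / (365 * 24 * 3600)

print(f"{combinations:.4e} combinations")          # -> 7.2139e+15
print(f"{years:,.1f} years to exhaust the space")  # ~686 million
```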

This isn't "hard," it's "infeasible." I think the idea is not just to
force an attacker to put lavish amounts of work into creating an
exploit that reliably re-exploits a hole over and over again, but to
make it so he can't make an exploit that actually works at all, except
by ridiculously remote chance.
Post by Linus Torvalds
So all security issues are about balancing cost vs gain. I'm convinced
that the gain from openness is higher than the cost. Others will disagree.
Yes. Nobody code audits your binaries. You need source code to do
source code auditing. :)
Post by Linus Torvalds
Linus
- --
All content of all messages exchanged herein are left in the
Public Domain, unless otherwise explicitly stated.
John Richard Moser
2005-01-13 19:35:46 UTC
Permalink
[...]
Post by John Richard Moser
You can't guarantee you can guess a password. You could for example
write a pam module that mandates a 3 second delay on failed
authentication for a user (it does it for the console currently; use 3
separate consoles and you can do the attack 3 times faster). Now you
have to guess the password with one try every 3 seconds.
Already done; actually standard practice. This does not actually mean
that you cannot guess a password, just that it will take longer (on
average). Luck and some knowledge about the system and its people speed
up the process, so the standard procedure if you really want to get
into a system with a password is to gather information.
I'm pretty sure that you only get a 3 second delay on the specific
console. I've mistyped my root password on tty1, and switched to tty2
to log in before the delay was up.

as a test, switch to vc/0 and enter 'root', then press enter. Type a
bogus password.

Switch to vc/1, and enter 'root', then press enter. Type your real root
password.

Go back to vc/0 and hit enter so you submit your false password, then
immediately switch to vc/1 and hit enter.

You should get a bash shell and have enough time to switch to vc/0 and
see it still waiting for a second or two, before returning "login
incorrect."

Automating an attack on about 10 different ssh connections shouldn't be
a problem. Just keep creating them.
Post by John Richard Moser
aA1# 96 possible values per character, 8 characters. 7.2139x10^15
combinations. It takes 686253404.7 years to go through all those at one
every 3 seconds. You've got a good chance at half that.
This isn't "hard," it's "infeasible." I think the idea is not just to
force an attacker to put lavish amounts of work into creating an
exploit that reliably re-exploits a hole over and over again, but to
make it so he can't make an exploit that actually works at all, except
by ridiculously remote chance.
Post by Linus Torvalds
So all security issues are about balancing cost vs gain. I'm convinced
that the gain from openness is higher than the cost. Others will disagree.
Yes. Nobody code audits your binaries. You need source code to do
source code auditing. :)
Post by Linus Torvalds
Linus
- --
All content of all messages exchanged herein are left in the
Public Domain, unless otherwise explicitly stated.
Linus Torvalds
2005-01-13 19:46:10 UTC
Permalink
Post by John Richard Moser
Post by Linus Torvalds
So all security issues are about balancing cost vs gain. I'm convinced
that the gain from openness is higher than the cost. Others will disagree.
Yes. Nobody code audits your binaries. You need source code to do
source code auditing. :)
Oh, it's very clear that some exploits have definitely been written by
looking at the source code with automated tools or by instrumenting
things, and that the exploits would likely have never been found without
source code. That's fine. We just have higher requirements in the open
source community.

And I do think that the same is true for being open about security
advisories: I think that to offset an open security list, we'd have to
then have more "best practices" than a vendor-sec-type closed security
list might need. I think it would be worth it.

Linus
John Richard Moser
2005-01-13 19:57:33 UTC
Permalink
Post by Linus Torvalds
Post by John Richard Moser
Post by Linus Torvalds
So all security issues are about balancing cost vs gain. I'm convinced
that the gain from openness is higher than the cost. Others will disagree.
Yes. Nobody code audits your binaries. You need source code to do
source code auditing. :)
Oh, it's very clear that some exploits have definitely been written by
looking at the source code with automated tools or by instrumenting
things, and that the exploits would likely have never been found without
source code. That's fine. We just have higher requirements in the open
source community.
Yeah but malicious people are more determined than whitehats and
greyhats. If I'm trying to find bugs to help you fix them, I'm not
going to waste my time on running your binaries through a debugger. If
I want to use your machine as a sock puppet to attack SCO, then maybe.

In contrast, if I've got a good background in programming and want to
help you find and fix security bugs, it's not that big a deal for me to
brush over your source code. If I'm just in there to improve it or add
new features, I might even ACCIDENTALLY stumble over something. This is
where OSS becomes more secure :)

I think we're on the same page, Linus :)
Post by Linus Torvalds
And I do think that the same is true for being open about security
advisories: I think that to offset an open security list, we'd have to
then have more "best practices" than a vendor-sec-type closed security
list might need. I think it would be worth it.
It'd need control. You can start an open security advisory list if you
like, but don't just flip off the vendors who want to keep their
security advisories quiet until they have a fix.

Aside from that, go for it.
Post by Linus Torvalds
Linus
- --
All content of all messages exchanged herein are left in the
Public Domain, unless otherwise explicitly stated.
Norbert van Nobelen
2005-01-13 19:22:03 UTC
Permalink
Post by John Richard Moser
[...]
Post by Linus Torvalds
Am I claiming that disallowing self-written ELF binaries gets rid of all
security holes? Obviously not. I'm claiming that there are things that
people can do that make it harder, and that _real_ security is not about
trusting one subsystem, but in making it hard enough in many independent
ways that it's just too effort-intensive to attack.
I think you can make it non-guaranteeable.
Post by Linus Torvalds
It's the same thing with passwords. Clearly any password protected system
can be broken into: you just have to guess the password. It then becomes
a matter of how hard it is to "guess" - at some point you say a password
is secure not because it is a password, but because it's too _expensive_
to guess/break.
You can't guarantee you can guess a password. You could for example
write a pam module that mandates a 3 second delay on failed
authentication for a user (it does it for the console currently; use 3
separate consoles and you can do the attack 3 times faster). Now you
have to guess the password with one try every 3 seconds.
Already done; actually standard practice. This does not actually mean
that you cannot guess a password, just that it will take longer (on
average). Luck and some knowledge about the system and its people speed
up the process, so the standard procedure if you really want to get
into a system with a password is to gather information.
Post by John Richard Moser
aA1# 96 possible values per character, 8 characters. 7.2139x10^15
combinations. It takes 686253404.7 years to go through all those at one
every 3 seconds. You've got a good chance at half that.
This isn't "hard," it's "infeasible." I think the idea is not just to
force an attacker to put lavish amounts of work into creating an
exploit that reliably re-exploits a hole over and over again, but to
make it so he can't make an exploit that actually works at all, except
by ridiculously remote chance.
Post by Linus Torvalds
So all security issues are about balancing cost vs gain. I'm convinced
that the gain from openness is higher than the cost. Others will disagree.
Yes. Nobody code audits your binaries. You need source code to do
source code auditing. :)
Post by Linus Torvalds
Linus
--
<a href="http://www.edusupport.nl">EduSupport: Linux Desktop for schools and
small to medium business in The Netherlands and Belgium</a>
Alan Cox
2005-01-13 18:53:37 UTC
Permalink
Scripts can only do what the interpreter does. And it's often a lot harder
to get the interpreter to do certain things. For example, you simply
_cannot_ get any thread race conditions with most scripts out there, nor
can you generally use magic mmap patterns.
And then perl was invented.
Am I claiming that disallowing self-written ELF binaries gets rid of all
security holes? Obviously not. I'm claiming that there are things that
people can do that make it harder, and that _real_ security is not about
trusting one subsystem, but in making it hard enough in many independent
ways that it's just too effort-intensive to attack.
It lasts until someone publishes the first perl ELF loader/executor on
bugtraq, or ruby, or python, or java. Then everyone has it.
It's the same thing with passwords. Clearly any password protected system
can be broken into: you just have to guess the password. It then becomes a
matter of how hard it is to "guess" - at some point you say a password is
secure not because it is a password, but because it's too _expensive_ to
guess/break.
It's more like breaking a password algorithm, or everyone having the
same password, unfortunately. One perl ELF loader and it's game over.
You can do this stuff with SELinux, but even then it is very hard and
you have to whack the interpreters.
William Lee Irwin III
2005-01-13 04:49:19 UTC
Permalink
Post by Dave Jones
Post by Andrew Morton
IMO, local DoS holes are important mainly because buggy userspace
applications allow remote users to get in and exploit them, and for that
reason we of course need to fix them up. Even though such an attacker
could cripple the machine without exploiting such a hole.
For the above reasons I see no need to delay publication of local DoS holes
at all. The only thing for which we need to provide special processing is
privilege escalation bugs.
Or am I missing something?
The problem is it depends on who you are, and what you're doing with Linux
how much these things affect you.
A local DoS doesn't bother me one squat personally, as I'm the only
user of computers I use each day. An admin of a shell server or
the like however would likely see this in a different light.
(though it can be argued a mallet to the kneecaps of the user
responsible is more effective than any software update)
It deeply disturbs me to hear this kind of talk. If we're pretending to
be a single-user operating system, why on earth did we use UNIX as a
precedent in the first place?
Post by Dave Jones
An information leak from kernel space may be equally as mundane to some,
though terrifying to some admins. Would you want some process to be
leaking your root password, credit card #, etc to some other users process ?
privilege escalation is clearly the number one threat. Whilst some
class a 'remote root hole' as higher risk than a 'local root hole', far
too often we've had instances where execution of shellcode by
overflowing some buffer in $crappyapp has led to a shell,
turning a local root into a remote root.
For us thankfully, exec-shield has trapped quite a few remotely
exploitable holes, preventing the above.
If we give up and say we're never going to make multiuser use secure,
where is our distinction from other inherently insecure single-user OS's?


-- wli
Dave Jones
2005-01-13 05:19:33 UTC
Permalink
Post by William Lee Irwin III
Post by Dave Jones
The problem is it depends on who you are, and what you're doing with Linux
how much these things affect you.
A local DoS doesn't bother me one squat personally, as I'm the only
user of computers I use each day. An admin of a shell server or
the like however would likely see this in a different light.
(though it can be argued a mallet to the kneecaps of the user
responsible is more effective than any software update)
It deeply disturbs me to hear this kind of talk. If we're pretending to
be a single-user operating system, why on earth did we use UNIX as a
precedent in the first place?
You completely missed my point. What's classed as a threat to one
user just isn't relevant to another.
Post by William Lee Irwin III
Post by Dave Jones
An information leak from kernel space may be equally as mundane to some,
though terrifying to some admins. Would you want some process to be
leaking your root password, credit card #, etc to some other users process ?
privilege escalation is clearly the number one threat. Whilst some
class a 'remote root hole' as higher risk than a 'local root hole', far
too often we've had instances where execution of shellcode by
overflowing some buffer in $crappyapp has led to a shell,
turning a local root into a remote root.
For us thankfully, exec-shield has trapped quite a few remotely
exploitable holes, preventing the above.
If we give up and say we're never going to make multiuser use secure,
where is our distinction from other inherently insecure single-user OS's?
Nowhere did I make that claim. If you parsed the comment about
exec-shield incorrectly, I should point out that we also issued
security updates to various applications even though (due to exec-shield)
our users weren't vulnerable. The comment was an indication that
the extra barrier has bought us some time in preparing updates
when 0-day exploits have been sprung on us unexpectedly on more
than one occasion.

Dave
Alan Cox
2005-01-13 15:36:30 UTC
Permalink
Post by Andrew Morton
For the above reasons I see no need to delay publication of local DoS holes
at all. The only thing for which we need to provide special processing is
privilege escalation bugs.
Or am I missing something?
Universities and web hosting companies see the DoS issue rather
differently sometimes. (Once we have Xen in the tree we'll have a good
answer)
Dave Jones
2005-01-13 03:25:06 UTC
Permalink
Post by Linus Torvalds
Yes, I think delayed disclosure is broken. I think the whole notion of
"vendor update available when disclosure happens" is nothing but vendor
politics, and doesn't help _users_ one whit.
The volume of traffic we as a vendor get every time an issue
makes news (and sadly even the insignificant issues seem to be
making news these days) from users wanting to know where our
updates are is a good indication that your thinking is clearly bogus.
Post by Linus Torvalds
The only thing it does is allow the vendor to point fingers and say "hey, we
have an update, now it's your problem".
I fail to see the point you're trying to make here.
Post by Linus Torvalds
So it's embarrassing to everybody if the kernel.org kernel has a security
hole for longer than vendor kernels, but at the same time, most _users_
run vendor kernels anyway, so maybe the current setup is the proper one,
and the kernel.org kernel _should_ be the last one to get the fix.
I think the timeliness isn't the issue; the issue is making sure that
the kernel.org kernel actually does end up getting the fixes.
That 2.6.10 got out of -rc with known vulnerabilities which were
known to be fixed in 2.6.9-ac is mind-boggling. That a 2.6.10.1
didn't follow up yet is equally so.

Part of the premise of the 'new' development model was that vendor kernels
were where people go for the 'super-stable kernel', and the kernel.org
kernel may not be quite so polished around the edges. This seems to
go against what you're saying in this thread which reads..
'kernel.org kernels might not be as stable as vendor kernels, but you're
going to need to run it if you want security holes fixed asap'
Post by Linus Torvalds
Whatever. I happen to believe in openness, and vendor-sec does not. It's
that simple.
That openness comes at a price. I don't need to bore you with
analogies, as you know as well as I do how wide and far Linux
is deployed these days, but doing this openly is just irresponsible.

Someone malicious on getting the announcement of a new kernel.org release
gets told exactly where the hole is and how to exploit it.
All they'll need to do is find a target running a vendor kernel before
updates get deployed. Whilst this is true to a certain degree
today, as not everyone deploys security updates in a timely manner
(some not at all), things can only get worse.

Dave
Marek Habersack
2005-01-13 03:53:31 UTC
Permalink
On Wed, Jan 12, 2005 at 10:25:06PM -0500, Dave Jones scribbled:
[snip]
Post by Dave Jones
Post by Linus Torvalds
Whatever. I happen to believe in openness, and vendor-sec does not. It's
that simple.
That openness comes at a price. I don't need to bore you with
analogies, as you know as well as I do how wide and far Linux
is deployed these days, but doing this openly is just irresponsible.
Someone malicious on getting the announcement of a new kernel.org release
gets told exactly where the hole is and how to exploit it.
All they'll need to do is find a target running a vendor kernel before
updates get deployed. Whilst this is true to a certain degree
today, as not everyone deploys security updates in a timely manner
(some not at all), things can only get worse.
That might be, but note one thing: not everybody runs vendor kernels (for various
reasons). Now see what happens when the super-secret vulnerability (with
vendor fixes) is described in an advisory. A person managing a fleet of machines
(let's say 100) with custom, non-vendor kernels suddenly finds out that they
have a buggy kernel and 100 machines to upgrade while the exploit and the
description of the vuln are out in the wild. They have to port their
custom stuff to the new kernel, compile it, test it (at least a bit), deploy
on 100 machines and pray it doesn't break. During all that time (and the
whole process won't take a day or even two) the evil guys are far ahead of
the poor bastard managing the 100 machines (since all they need is one
exploit which will work on any of our admin's machines). One other factor
that makes it hard for such a person to apply the patches is simply that there
is no single place to find the security patches. He goes to securityfocus.com,
for instance, and what does he find? A nice description of the vulnerability, a
discussion, a list of affected kernel versions and credits which usually
list vendor advisories and kernel versions and very rarely a link to an
archived mail message or a webpage with the patch. Hoping he'll find the
fixes in the vendor kernels, he goes to download source packages from SuSE,
Red Hat, Trustix, Debian, Ubuntu, whatever, and discovers that it is as easy
to find the patch there as it is to fish it out of the vanilla kernel patch
for the new version. Frustrating, isn't it? Not to mention that he might
need to backport the fix, if he runs an earlier version of the kernel.
And now assume that everything is as extremely open as Linus says - the
admin has the same access to the exact information the vendors on vendor-sec
have, together with the same fix they have (in form of a simple patch
available without fishing for it all over the place). He starts the race
with the bad guys at exactly the same time they start scanning for
vulnerable machines on the 'Net. Priceless, IMHO.
I guess that, contrary to what you've just said above, hiding the
information is irresponsible.
Having said that, I don't think everything should be as extremely open as
Linus would want it to see, but rather the way he proposed (and which many
folks agreed to) with the 5-day (or so) embargo for the advisory release and
with the patch(es)/discussion openly available to anyone interested (based
on the premise that most people learn about vulnerabilities not from
security lists but from security bulletins, tech news sites, user forums etc.)

best regards,

marek
Barry K. Nathan
2005-01-13 05:38:07 UTC
Permalink
Post by Marek Habersack
archived mail message or a webpage with the patch. Hoping he'll find the
fixes in the vendor kernels, he goes to download source packages from SuSe,
RedHat or Trustix, Debian, Ubuntu, whatever and discovers that it is as easy
to find the patch there as it is to fish it out of the vanilla kernel patch
for the new version. Frustrating, isn't it? Not to mention that he might
http://linux.bkbits.net is your friend.

Each patch (including security fixes) in the mainline kernels (2.4 and
2.6) appears there as an individual, clickable link with a description
(e.g. "1.1551 Paul Starzetz: sys_uselib() race vulnerability
(CAN-2004-1235)").

If other patches have gone in since then, you may have to scroll through
a (short-form) changelog. However, it's still less frustrating than the
scenario you portray.

-Barry K. Nathan <***@pobox.com>
Florian Weimer
2005-01-13 08:59:00 UTC
Permalink
Post by Barry K. Nathan
Post by Marek Habersack
archived mail message or a webpage with the patch. Hoping he'll find the
fixes in the vendor kernels, he goes to download source packages from SuSe,
RedHat or Trustix, Debian, Ubuntu, whatever and discovers that it is as easy
to find the patch there as it is to fish it out of the vanilla kernel patch
for the new version. Frustrating, isn't it? Not to mention that he might
http://linux.bkbits.net is your friend.
Each patch (including security fixes) in the mainline kernels (2.4 and
2.6) appears there as an individual, clickable link with a description
(e.g. "1.1551 Paul Starzetz: sys_uselib() race vulnerability
(CAN-2004-1235)").
This is the exception. Usually, changelogs are cryptic, often
deliberately so. Do you still remember Alan's DMCA protest
changelogs?
Barry K. Nathan
2005-01-13 15:31:07 UTC
Permalink
Post by Florian Weimer
This is the exception. Usually, changelogs are cryptic, often
deliberately so. Do you still remember Alan's DMCA protest
changelogs?
Yes, I remember. However, if I saw a BK changeset called "Security fixes"
or "Security fixes -- details censored in accordance with the US DMCA"
then it would obviously be a security patch worth looking at. So looking
at linux.bkbits.net would still be an improvement over looking at a raw
patch with everything combined (which is what the complaint was
about).

-Barry K. Nathan <***@pobox.com>
Alan Cox
2005-01-13 15:36:52 UTC
Permalink
Post by Florian Weimer
This is the exception. Usually, changelogs are cryptic, often
deliberately so. Do you still remember Alan's DMCA protest
changelogs?
They were not cryptic, just following the law to the point it claimed
necessary....

That aside, right now, because Linus doesn't give us a heads-up, we vendors
spend our time scanning all of Linus' diffs and playing spot-the-security-fix,
because we know the bad guys do the same, and they are rather good
at it. It's useful anyway - e.g. it's how we found that base kernels have
broken AX.25, and several other patches got tagged for immediate revert
in the -ac tree (and of course reported back upstream to l/k) - but it's a
pain to have to do it this way.

Having a list that fed such notices on to vendor-sec with a date fixed
by them is a real possible improvement - that's how we work with many
other projects. I also don't see any reason that Linus or Andrew
wouldn't be able to become a CAN issuing authority for security
advisories.

Alan
Marek Habersack
2005-01-13 19:25:51 UTC
Permalink
Post by Barry K. Nathan
Post by Marek Habersack
archived mail message or a webpage with the patch. Hoping he'll find the
fixes in the vendor kernels, he goes to download source packages from SuSe,
RedHat or Trustix, Debian, Ubuntu, whatever and discovers that it is as easy
to find the patch there as it is to fish it out of the vanilla kernel patch
for the new version. Frustrating, isn't it? Not to mention that he might
http://linux.bkbits.net is your friend.
I know about that, but many people don't.
Post by Barry K. Nathan
Each patch (including security fixes) in the mainline kernels (2.4 and
2.6) appears there as an individual, clickable link with a description
(e.g. "1.1551 Paul Starzetz: sys_uselib() race vulnerability
(CAN-2004-1235)").
If other patches have gone in since then, you may have to scroll through
a (short-form) changelog. However, it's still less frustrating than the
scenario you portray.
Less frustrating, yes; safer, not even slightly. You are still left on thin
ice precisely from the moment you are notified about the vulnerability (when
it goes public). Those who are not members of vendor-sec still don't have the
privilege of knowing about the vulnerability ahead of time, before it goes
"officially" public. Besides, I know a few people who administer Linux
machines who don't know what bkbits.net is, and they shouldn't have to. There
should be a single place, a webpage which you can visit (or get an RSS feed
of) and be sure you will be among the first to know about a vulnerability
(yes, I know about the CIA feeds, but this is still not the real thing,
IMHO).

regards,

marek
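[Editor's note: the single advisory page/RSS feed Marek wishes for could be consumed with something as small as the sketch below. The feed contents, URL-free sample, and function name are purely hypothetical - no such official kernel advisory feed existed at the time.]

```python
import xml.etree.ElementTree as ET

# A hypothetical RSS 2.0 advisory feed; in practice this XML would be
# fetched from the (nonexistent, at the time) kernel-security page.
SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0"><channel>
  <title>kernel-security advisories (hypothetical)</title>
  <item><title>CAN-2004-1235: sys_uselib() race</title></item>
</channel></rss>"""

def advisory_titles(feed_xml: str) -> list[str]:
    # Pull the title of every advisory item out of the feed.
    root = ET.fromstring(feed_xml)
    return [item.findtext("title") for item in root.iter("item")]

print(advisory_titles(SAMPLE_FEED))
```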
Christoph Hellwig
2005-01-13 19:35:24 UTC
Permalink
2.6.9 for example went out with known holes and broken AX.25 (known)
2.6.10 went out with the known holes mostly fixed but memory corrupting
bugs, AX.25 still broken and the wrong fix applied for the smb holes so
SMB doesn't work on it
XFS on 2.6.10 does work.
Freudian typo - it should have been smbfs, as should be obvious from the
context I replied to.
Depends on your definition of 'work'.
It oopses under load with NFS very easily,
Do you have a bugreport?
Dave Jones
2005-01-13 19:59:02 UTC
Permalink
Post by Christoph Hellwig
2.6.9 for example went out with known holes and broken AX.25 (known)
2.6.10 went out with the known holes mostly fixed but memory corrupting
bugs, AX.25 still broken and the wrong fix applied for the smb holes so
SMB doesn't work on it
XFS on 2.6.10 does work.
freudian typo, should have been smbfs as it should be obvious for the
context I replied to.
The smbfs breakage seemed to depend on what server you were using.
For a lot of folks it broke horribly.
Post by Christoph Hellwig
Depends on your definition of 'work'.
It oopses under load with NFS very easily,
Do you have a bugreport?
There are a number of XFS-related issues in Red Hat bugzilla.
As it's not something we actively support, they've not got a lot
of attention. Some of them are quite old (dating back to 2.6.6 or so),
so they may already have been fixed.

We've also seen a few reports on Fedora mailing lists.

Dave
Alan Cox
2005-01-13 18:55:42 UTC
Permalink
Post by Christoph Hellwig
freudian typo, should have been smbfs as it should be obvious for the
context I replied to.
It works in some situations but not others. Chuck Ebbert fixed this but
it's never gone upstream, although I think Andrew is now looking at
it.
Marek Habersack
2005-01-13 19:36:37 UTC
Permalink
Post by Marek Habersack
That might be, but note one thing: not everybody runs vendor kernels (for various
reasons). Now see what happens when the super-secret vulnerability (with
vendor fixes) is described in an advisory. A person managing a park of machines
(let's say 100) with custom, non-vendor, kernels suddenly finds out that they
have a buggy kernel and 100 machines to upgrade while the exploit and the
Those running 2.4 non-vendor kernels are just fine because Marcelo
chooses to work with vendor-sec while Linus chooses not to. I choose to
work with vendor-sec so generally the -ac tree is also fairly prompt on
fixing things.
That's fine, but if one isn't on vendor-sec, they are still out in the cold
until the vulnerability with an embargo is announced - at which point all
the vendors are ready, but those with non-vendor kernels are in for an
unpleasant surprise. And as for 2.4, yes, Marcelo does a good job applying
the fixes asap, but that's not helping. If one runs (as I wrote) a kernel
with custom code inside, tux and, say, grsecurity - and it's not the latest
2.4 kernel - he still needs to backport the fixes and make sure they work
fine with his custom code, all that in a great hurry. Somebody suggested
here that perhaps there could be a version of a security fix released for
X past kernel versions (2? 3?) if it doesn't apply cleanly to them. That
would be a great help along with earlier notification of a problem - not in
the way it is done with vendor-sec where you have to wear a pointy hat and
a beard to be accepted as a member. It's not that I'm whining or bitching,
hell no, I just think it would be more fair if everybody was treated the
same - vendors, non-vendors, bad guys, all alike.
Given that base 2.6 kernels are shipped by Linus with known unfixed
security holes anyone trying to use them really should be doing some
careful thinking. In truth no 2.6 released kernel is suitable for
anything but beta testing until you add a few patches anyway.
2.6.9 for example went out with known holes and broken AX.25 (known)
2.6.10 went out with the known holes mostly fixed but memory corrupting
bugs, AX.25 still broken and the wrong fix applied for the smb holes so
SMB doesn't work on it
I still think the 2.6 model works well because its making very good
progress and then others are doing testing and quality management on it.
Linus is doing the stuff he is good at and other people are doing the
stuff he doesn't.
That change of model changes the security model too however.
yes, definitely. IMHO, it enforces prompt and open security advisory/patch
releases, just as Linus proposed (with the limited embargo). Of course, one
can just take a released vendor kernel, patch it with their custom code and
compile it the way they see fit, but it's not in any way faster or better
than backporting the fixes to your own kernel.

regards,

marek
Christoph Hellwig
2005-01-13 19:25:12 UTC
Permalink
2.6.9 for example went out with known holes and broken AX.25 (known)
2.6.10 went out with the known holes mostly fixed but memory corrupting
bugs, AX.25 still broken and the wrong fix applied for the smb holes so
SMB doesn't work on it
XFS on 2.6.10 does work. The patches you had in earlier -ac made it
not work.
Dave Jones
2005-01-13 19:33:56 UTC
Permalink
2.6.9 for example went out with known holes and broken AX.25 (known)
2.6.10 went out with the known holes mostly fixed but memory corrupting
bugs, AX.25 still broken and the wrong fix applied for the smb holes so
SMB doesn't work on it
XFS on 2.6.10 does work.
Depends on your definition of 'work'.
It oopses under load with NFS very easily, though that's not helped
with 4K stacks.

Dave
Florian Weimer
2005-01-13 08:23:26 UTC
Permalink
Post by Linus Torvalds
So I think the whole vendor-sec thing is not helping users at all, it's
purely a "vendor embarassment" thing.
At least vendor-sec serves as a candidate naming authority for CVE,
and makes sure that the distributors use the same set of CANs in their
advisories. This is an important step forward, because otherwise end
users have no way to tell whether vendor A is fixing the same problem
as vendor B.

In the past, the kernel developers (including you) supported the
vendor-sec process by not addressing security issues in official
kernels in a timely manner, and (what's far worse from a user point of
view) silently fixing security bugs in new releases, probably because
some vendor kernels weren't fixed yet. Especially the last point
doesn't help users.
Kristofer T. Karas
2005-01-13 16:00:48 UTC
Permalink
Post by Linus Torvalds
So I'd not personally mind some _totally_ open list. No embargo at all, no
limits on who reads it. The more, the merrier. However, I think my
personal preference is pretty extreme in one end
I'm tipping my security hat to Linus (and somewhat away from RFPolicy)
on this one. Keeping a large organization free from viruses and malware
becomes increasingly entertaining the more "day zero" variants there
are. And recently, we've seen a lot for the windoze platform here; at
least one major anti-virus player thanks us for sending them infected
executables to analyze. Waiting for some embargo to allow a researcher
to claim credit just does not work. We spend all of our time swatting
flies, waiting for a vendor fix; yet a disclose-without-delay
quick-and-dirty fix would have saved so many staff hours.
Post by Linus Torvalds
So it's embarrassing to everybody if the kernel.org kernel has a security
hole for longer than vendor kernels, but at the same time, most _users_
run vendor kernels anyway
Not here! :-) All of my security infrastructure runs kernel.org
kernels. (I don't want any vendor "goodies" hidden in places I don't
know about.) I punch a button on my heavily-hacked Slackware boxen, and
the latest kernel, the latest internet-facing servers, the latest
critical libraries are automatically downloaded, compiled and installed
whenever newer version numbers exist. Time to a patched system from
when the author creates a patch is measured in hours; I compare that to
the day(s) or weeks I can wait for a vendor to get around to doing the
same thing.

Kris
Marcelo Tosatti
2005-01-12 17:42:03 UTC
Permalink
Post by Linus Torvalds
Post by Marcelo Tosatti
How you feel about having short fixed time embargo's (lets say, 3 or 4 days) ?
Please realize that I don't have any problem with a short-term embargo per
se, what I have problems with is the _politics_ that it causes. For
example, I do _not_ want this to become a
"vendor-sec got the information five weeks ago, and decided to embargo
until day X, and then because they knew of the 4-day policy of the
kernel security list, they released it to the kernel security list on
day X-4"
See? That is playing politics with a security list. That's the part I
don't want to have anything to do with. If somebody did that to me, I'd
feel pissed off like hell, and I'd say "screw them".
An important thing is that you now agree with the embargo, which you never
did before - you always applied corrections for security bugs without being
concerned about a disclosure date agreement (and you have your own reasons
and arguments for that, OK).

That makes vendor-sec etc. uncomfortable about submitting information to you.

Great to hear you think differently now and are willing to agree on a
reasonable embargo period.

The kernel security list must be higher in the hierarchy than vendor-sec.

Any information sent to vendor-sec must be sent immediately to the kernel
security list and discussed there.

I'm sure one week is enough for vendors to prepare updates, and I'm sure they
will be fine with it.
Post by Linus Torvalds
But in the absence of politics, I'd _happily_ have a self-imposed embargo
that is limited to some reasonable timeframe (and "reasonable" is
definitely counted in days, not weeks. And absolutely _not_ in months,
like apparently sometimes happens on vendor-sec).
We all agree there is no good reason to embargo a kernel bug for more than
one week, given that the fix is known and settled.
Post by Linus Torvalds
So if the embargo time starts ticking from _first_ report, I'd personally
be perfectly happy with a policy of, say "5 working days" (aka one week),
or until it was made public somewhere else.
IOW, if it was released on vendor-sec first, vendor-sec could _not_ then
try to time the technical list (unless they do so in a very timely manner
indeed).
I'm not saying that we'd _have_ to go public after five days. I'm saying
that after that, there would be nothing holding it back (but maybe the
technical discussion on how to _fix_ it is still on-going, and that might
make people just not announce it until they're ready).
Wonderful.
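[Editor's note: the "5 working days from first report" policy Linus sketches can be made concrete with a small calculation. The function name and the weekend handling are assumptions of this sketch; the thread never specifies how non-working days count.]

```python
from datetime import date, timedelta

def disclosure_deadline(reported: date, working_days: int = 5) -> date:
    """Earliest date a report could go public under a
    'five working days from first report' policy (a sketch of the
    proposal in this thread, not an agreed rule)."""
    d = reported
    remaining = working_days
    while remaining > 0:
        d += timedelta(days=1)
        if d.weekday() < 5:  # count Mon-Fri as working days
            remaining -= 1
    return d

# A report first received on Wed 2005-01-12 could go public the
# following Wednesday:
print(disclosure_deadline(date(2005, 1, 12)))  # → 2005-01-19
```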
Alan Cox
2005-01-13 15:36:27 UTC
Permalink
Post by Marcelo Tosatti
The kernel security list must be higher in hierarchy than vendorsec.
Any information sent to vendorsec must be sent immediately for the kernel
security list and discussed there.
We cannot do this without the reporter's permission. Often we get
material that even the list isn't allowed to see directly; it only goes
out by contacting the relevant bodies directly as well. The list then just
serves as a "foo should have told you about issue X" notification.

If you are setting up the list, also make sure it's entirely encrypted,
after the previous sniffing incident.

Alan
Florian Weimer
2005-01-13 17:52:56 UTC
Permalink
Post by Alan Cox
We cannot do this without the reporters permission. Often we get
material that even the list isn't allowed to directly see only by
contacting the relevant bodies directly as well. The list then just
serves as a "foo should have told you about issue X" notification.
If you are setting up the list also make sure its entirely encrypted
after the previous sniffing incident.
Others have made good use of symmetric encryption with OpenPGP
(the CAST5 cipher seems most interoperable). New symmetric keys are
distributed twice per year, using the participants' OpenPGP public
keys.

(There are also various implementations of reencrypting mailing lists,
but they cannot ensure end-to-end encryption.)
Marek Habersack
2005-01-13 19:42:46 UTC
Permalink
Post by Alan Cox
Post by Marcelo Tosatti
The kernel security list must be higher in hierarchy than vendorsec.
Any information sent to vendorsec must be sent immediately for the kernel
security list and discussed there.
We cannot do this without the reporters permission. Often we get
I think I don't understand that. A reporter doesn't "own" the bug - not the
copyright, not the code - so how can they own the fix/report?
Post by Alan Cox
material that even the list isn't allowed to directly see only by
contacting the relevant bodies directly as well. The list then just
serves as a "foo should have told you about issue X" notification.
This sounds crazy. I understand that this may happen with proprietary
software, or with software that is made/supported by a company but otherwise
open source (like OpenOffice, for instance), but the kernel?

regards,

marek
Chris Wright
2005-01-13 19:50:04 UTC
Permalink
Post by Marek Habersack
Post by Alan Cox
Post by Marcelo Tosatti
The kernel security list must be higher in hierarchy than vendorsec.
Any information sent to vendorsec must be sent immediately for the kernel
security list and discussed there.
We cannot do this without the reporters permission. Often we get
I think I don't understand that. A reporter doesn't "own" the bug - not the
copyright, not the code, so how come they can own the fix/report?
It's not about ownership. It's about disclosure and common sense.
If someone reports something to you in private, and you disclose it
publicly (or even privately to someone else) without first discussing
that with them, you'll lose their confidence. Consequently they won't
be so kind as to give you forewarning next time.
Post by Marek Habersack
Post by Alan Cox
material that even the list isn't allowed to directly see only by
contacting the relevant bodies directly as well. The list then just
serves as a "foo should have told you about issue X" notification.
This sounds crazy. I understand that this may happen with proprietary
software, or software that is made/supported by a company but otherwise opensource
(like OpenOffice, for instance), but the kernel?
Licensing is irrelevant. Like it or not, the person who is discovering
the bugs has some say in how you deal with the information. It's in our
best interest to work nicely with these folks, not marginalize them.

thanks,
-chris
--
Linux Security Modules http://lsm.immunix.org http://lsm.bkbits.net
Marek Habersack
2005-01-13 20:29:05 UTC
Permalink
Post by Chris Wright
Post by Marek Habersack
Post by Alan Cox
Post by Marcelo Tosatti
The kernel security list must be higher in hierarchy than vendorsec.
Any information sent to vendorsec must be sent immediately for the kernel
security list and discussed there.
We cannot do this without the reporters permission. Often we get
I think I don't understand that. A reporter doesn't "own" the bug - not the
copyright, not the code, so how come they can own the fix/report?
It's not about ownership. It's about disclosure and common sense.
If someone reports something to you in private, and you disclose it
publically (or even privately to someone else) without first discussing
that with them, you'll lose their confidence. Consequently they won't
be so kind to give you forewarning next time.
I understand that, but I don't see a point in holding the fixes back for the
majority of people (since the vendors on vendor-sec are a minority and I
suspect that more people run self-compiled kernels on their servers than the
vendor kernels, I might be wrong on that). If there is a list that's at
least half-open (i.e. invitation required, but no CV required :) then there
is no issue of confidence, is there? And with such list, everybody has
equal chances - bad, good and the ugly too. Maybe my logic is flawed, but
that's how I see it - the linux kernel is a piece of open code, accessible
to all, with all its features, bugs, flaws. So, if the code is open, the
reports about the code's security/bugs should be just as open, together with
fixes, from day one of finding the bug. Otherwise, if we have the scenario where
the vendor-sec members are informed about a bug+fix 2 months earlier and the
vulnerability+fix are disclosed 2 months later, then this is creating a
situation where not everybody has equal chances of reacting to the bug. As I
wrote earlier, that puts the folks using non-vendor kernels way behind both
the vendors _and_ the bad guys - since the latter have the
vulnerability, the fix _and_ (usually) the exploit (or they can come up with
it in a matter of hours). For me it's all about equal chances in reacting to
the security issues. Again, I might be totally wrong in my reasoning, feel
free to correct me.
Post by Chris Wright
Post by Marek Habersack
Post by Alan Cox
material that even the list isn't allowed to directly see only by
contacting the relevant bodies directly as well. The list then just
serves as a "foo should have told you about issue X" notification.
This sounds crazy. I understand that this may happen with proprietary
software, or software that is made/supported by a company but otherwise opensource
(like OpenOffice, for instance), but the kernel?
Licensing is irrelevant. Like it or not, the person who is discovering
the bugs has some say in how you deal with the information. It's in our
best interest to work nicely with these folks, not marginalize them.
It's not about marginalizing, because by requesting that their report be
kept secret for a while and known only to a small bunch of people, you could
say they are marginalizing us, the majority of people who use the Linux
kernel (us - those who aren't on the vendor-sec list). It's, again IMHO,
about equal chances. More and more often it seems that security advisories
and releases are treated as an asset for security companies, not a common
good/knowledge. And that's pretty sad...

regards,

marek
Alan Cox
2005-01-13 19:41:10 UTC
Permalink
Post by Marek Habersack
I understand that, but I don't see a point in holding the fixes back for the
majority of people (since the vendors on vendor-sec are a minority and I
vendor-sec probably covers the majority of users

It covers
2.4
2.6-ac
Red Hat
SuSE
Debian
Gentoo
Mandrake
and many more including some of the BSD folk (a lot of user space bugs
are common)

2.6 base isn't covered because Linus has differing views.
Post by Marek Habersack
suspect that more people run self-compiled kernels on their servers than the
vendor kernels, I might be wrong on that). If there is a list that's at
I'd say you are very very wrong from the data I have access to,
probably of the order of 1000:1 wrong or more.
Post by Marek Habersack
Post by Chris Wright
Licensing is irrelevant. Like it or not, the person who is discovering
the bugs has some say in how you deal with the information. It's in our
best interest to work nicely with these folks, not marginalize them.
It's not about marginalizing, because by requesting that their report is
kept secret for a while and known only to a small bunch of people, you could
say they are marginalizing us, the majority of people who use the linux
kernel (us - those who aren't on the vendor-sec list). It's, again IMHO,
They chose to. A lot of people report bugs directly to Linus too, or to
the lists, or to full-disclosure, depending upon their view. The folks who
report bugs in private - whether to Linus, to vendor-sec, to maintainers,
or to whoever - generally believe that the bad guys can move faster and
cause a lot of damage if a bug isn't fixed before the announcement.

That's based on the observation that:
- the bad guys only have to move a small exploit versus a large binary
- the exploit doesn't have to pass quality assurance; you just write more
- they can automate the attack tools very effectively

So the non-disclosure argument is perhaps put as "equality of access at
the point of discovery means everyone gets rooted". And if you want a
lot more detail on this, read papers on the models of security economics
- it's a well-studied field.

Alan
Marek Habersack
2005-01-13 21:02:29 UTC
Permalink
On Thu, Jan 13, 2005 at 07:41:10PM +0000, Alan Cox scribbled:
[snip]
Post by Alan Cox
Post by Marek Habersack
suspect that more people run self-compiled kernels on their servers than the
vendor kernels, I might be wrong on that). If there is a list that's at
I'd say you are very very wrong from the data I have access too,
probably of the order of 1000:1 wrong or more.
I stand corrected then, you have access to much better sources than I do, no
doubts.
Post by Alan Cox
Post by Marek Habersack
Post by Chris Wright
Licensing is irrelevant. Like it or not, the person who is discovering
the bugs has some say in how you deal with the information. It's in our
best interest to work nicely with these folks, not marginalize them.
It's not about marginalizing, because by requesting that their report is
kept secret for a while and known only to a small bunch of people, you could
say they are marginalizing us, the majority of people who use the linux
kernel (us - those who aren't on the vendor-sec list). It's, again IMHO,
They chose to. A lot of people report bugs directly to Linus too or to
the lists or to full-disclosure depending upon their view. The folks who
report bugs in private either to Linus or to vendor-sec or maintainers
or whoever generally believe that the bad guys can move faster and cause
They can still move faster when the vulnerability (and the fixed vendor
kernels) are released. The people who are to install the kernels usually
cannot act immediately, so if the bad guys have somebody on target, they
will root them anyway. I see no difference here from a totally open
disclosure list.
Post by Alan Cox
a lot of damage if a bug isn't fixed before announce.
Again, it works for vendors, not for end users, IMO.
Post by Alan Cox
Thats based on the observation that
- the bad guys have to move a small exploit versus a large binary
Delayed release doesn't change that. One still needs to download and deploy
the kernels (possibly compiling them first).
Post by Alan Cox
- the exploit doesn't have to pass quality assurance, you just write
more
again, closed mailing lists don't change that
Post by Alan Cox
- they can automate the attack tools very effectively
ditto
Post by Alan Cox
So the non-disclosure argument is perhaps put as "equality of access at
the point of discovery means everyone gets rooted.". And if you want a
lot more detail on this read papers on the models of security economics
- its a well studied field.
Theory is fine; in practice, the closed disclosure list changes matters only
for a vast minority of people - those who are to install the fixed kernels
are in exactly the same situation they would be in if there were a fully
open disclosure list.

all of this is IMHO, of course - cannot stress that more :)

best regards,

marek
Linus Torvalds
2005-01-13 21:22:28 UTC
Permalink
Post by Alan Cox
So the non-disclosure argument is perhaps put as "equality of access at
the point of discovery means everyone gets rooted.". And if you want a
lot more detail on this read papers on the models of security economics
- its a well studied field.
or in other words: you can write an exploit faster than you can write
the fix, so the thing needs delaying until a fix is available to make it
more equal.
That's a bogus argument, and anybody who looks at MS practices and
is at all honest with himself should see it as a bogus argument.

I think MS _still_ to this day will stand up and say that they have had no
zero-day exploits. Exactly because they count "zero-day" as the day things
get publicly released. Never mind that exploits were (and are)
privately available on cracking networks for months before. They just
haven't been publicly released BECAUSE EVERYBODY IS PARTICIPATING IN THE
GAME.

The unwritten rule in this community is "no honest person will report a bug
before its time is through". Which automatically means that you get
branded as being "bad" if you ever rock the boat. That's a piece of
bullshit, and anybody who doesn't admit it is being totally dishonest with
himself.

Me, I consider that to be dirty.

Does Linux have a better track record than MS? Damn right it does. We've
had fewer problems, and I think there are more people out there standing
up for what's right anyway. Fewer PR people deathly afraid of rocking the
boat. Better technology, and fewer horrid design mistakes.

But that doesn't mean that all the same things aren't true for vendor-sec
that are true for MS. They are just bad to a (much, I hope) smaller
degree.

So instead, let's look at FACTS:

- fixing a security bug is almost always much easier than writing an
exploit. Arjan, your argument simply isn't true except for the worst
possible fundamental design issues. You should know that. In the case
of "uselib()", it was literally four lines of obvious code - all the
rest was just to make sure that there weren't any other cases like that
lurking around.

- There are more white-hats around than black-hats, but they are often
less "driven" and motivated. Now _that_, I would argue, is the real
problem with early disclosure - motivation. The people really
motivated to find the bugs are the people who are also motivated to
mis-use them. However, vendor-sec and "the game" just makes it more
worth-while for security firms to participate in it - it gives them the
"good PR" thing. And how much can you trust the "gray hats"?

And this is why I believe vendor-sec is part of the problem. If you don't
see that, then you're blinding yourself to the downsides, and trying to
only look at the upsides.

Are there advantages and upsides? Yes. Are there disadvantages?
Indubitably. And anybody who disregards the disadvantages as "inevitable"
is not really interested in fixing the game.

Linus

Dave Jones
2005-01-13 20:03:08 UTC
Permalink
Post by Marek Habersack
Post by Alan Cox
Post by Marcelo Tosatti
The kernel security list must be higher in hierarchy than vendorsec.
Any information sent to vendorsec must be sent immediately for the kernel
security list and discussed there.
We cannot do this without the reporters permission. Often we get
I think I don't understand that. A reporter doesn't "own" the bug - not the
copyright, not the code, so how come they can own the fix/report?
Security researchers are an odd bunch. They're very attached to their
bugs, in the sense that they want to be the ones who get the glory for
having reported them.

As soon as bugs start getting forwarded around between lists, the
potential for leaks increases greatly. The recent fiasco surrounding
one of the isec.pl holes was believed to have been caused due to
someone 'sniffing upstream' for example.

When issues get leaked, the incentive for a researcher to use the
same process again goes away, which hurts us. Basically, trying
to keep them happy is in our best interests.

Dave
Alan Cox
2005-01-13 19:27:42 UTC
Permalink
In fact, right now we seem to encourage even people who do _not_
necessarily want the delay and secrecy to go over to vendor-sec, just
because the vendor-sec people are clearly arguing even against
alternatives.
If someone posts something to vendor-sec that says "please tell Linus"
we would. If someone posts to vendor-sec saying "I posted this to
linux-kernel here's a heads up" its useful. If you are uber cool elite 0
day disclosure weenie you post to full-disclosure or bugtraq. There are
alternatives 8)
Which is something I do not understand. The _apologia_ for vendor-sec is
absolutely stunning. Even if there are people who want to only interface
with a fascist vendor-sec-style absolute secrecy list, THAT IS NOT AN
EXCUSE TO NOT HAVE OPEN LISTS IN _ADDITION_!
I'm all for an open list too. It's currently called linux-kernel. It's
full of such reports, and most of them are about new code or trivial
holes where secrecy is pointless. Having an open linux-security list so
they don't get missed as the grsecurity stuff did (and, until I got fed
up of waiting, the coverity stuff did) would help because it would make
sure that it didn't get buried in the noise.

Similarly, it would help if, when you sneak security fixes in (as you do
regularly), you actually told the vendors about them.

Alan
Linus Torvalds
2005-01-13 21:03:22 UTC
Permalink
Post by Alan Cox
I'm all for an open list too. It's currently called linux-kernel. It's
full of such reports, and most of them are about new code or trivial
holes where secrecy is pointless. Having an open linux-security list, so
reports don't get missed as the grsecurity stuff did (and, until I got fed
up of waiting, the coverity stuff did), would help because it would make
sure things didn't get buried in the noise.
Yes. But I know people send private emails because they don't want to
create a scare, so I think we actually have several levels of lists:

- totally open: linux-kernel, or an alternative with lower noise

We've kind of got this, but things get lost in the noise, and "white
hat" people don't like feeling guilty about announcing things.

- no embargo, no rules, but "private" in the sense that it's supposed to
be for kernel developers only or at least people who won't take
advantage of it.

_I_ think this is the one that makes sense. No hard rules, but private
enough that people won't feel _guilty_ about reporting problems. Right
now I sometimes get private email from people who don't want to point
out some local DoS or similar, and that can certainly get lost in the
flow.

- _short_ embargo, for kernel-only. I obviously believe that vendor-sec
is whoring itself for security firms and vendors. I believe there would
be a place for something with stricter rules on disclosure.

- vendor-sec. The place where you can play any kind of games you want.

It's not a black-and-white thing. I refuse to believe that most security
problems are found by people without any morals. I believe that somewhere
in the middle is where most people feel most comfortable.

Linus
Marek Habersack
2005-01-13 20:32:06 UTC
Permalink
Post by Dave Jones
Post by Marek Habersack
Post by Alan Cox
Post by Marcelo Tosatti
The kernel security list must be higher in the hierarchy than vendorsec.
Any information sent to vendorsec must be sent immediately to the kernel
security list and discussed there.
We cannot do this without the reporter's permission. Often we get
I think I don't understand that. A reporter doesn't "own" the bug - not the
copyright, not the code, so how come they can own the fix/report?
Security researchers are an odd bunch. They're very attached to their
bugs in the sense they want to be the ones who get the glory for
having reported it.
Let them have it! We can even chip in to run banner ads on freshmeat with
their name on it, I'm all game for that. Or create an RFC-like archive of
the vulnerabilities, governed by the same rules - no changes after publishing.
Their names will be circulating around the internet forever.
Post by Dave Jones
As soon as bugs start getting forwarded around between lists, the
potential for leaks increases greatly. The recent fiasco surrounding
one of the isec.pl holes was believed to have been caused due to
someone 'sniffing upstream' for example.
I think it would be a non-issue if there was no drive towards keeping it
secret at all costs. It would be out in the open, nothing more, nothing less.
Post by Dave Jones
When issues get leaked, the incentive for a researcher to use the
same process again goes away, which hurts us. Basically, trying
to keep them happy is in our best interests.
You've said they want glory, we can give it to them in many ways without
keeping their discoveries secret.

best regards,

marek
Linus Torvalds
2005-01-13 20:10:33 UTC
Permalink
Post by Dave Jones
When issues get leaked, the incentive for a researcher to use the
same process again goes away, which hurts us. Basically, trying
to keep them happy is in our best interests.
Not so.

_balancing_ their happiness with our needs is what's in our best
interests. Yes, we should encourage them to tell us, but totally bending
over backwards is definitely the wrong thing to do.

In fact, right now we seem to encourage even people who do _not_
necessarily want the delay and secrecy to go over to vendor-sec, just
because the vendor-sec people are clearly arguing even against
alternatives.

Which is something I do not understand. The _apologia_ for vendor-sec is
absolutely stunning. Even if there are people who want to only interface
with a fascist vendor-sec-style absolute secrecy list, THAT IS NOT AN
EXCUSE TO NOT HAVE OPEN LISTS IN _ADDITION_!

In other words, I really don't understand this total subjugation by people
to the vendor-sec mentality. It's a disease, I tell you.

Linus
Alan Cox
2005-01-13 19:19:45 UTC
Permalink
Post by Marek Habersack
Post by Alan Cox
We cannot do this without the reporter's permission. Often we get
I think I don't understand that. A reporter doesn't "own" the bug - not the
copyright, not the code, so how come they can own the fix/report?
They own the report. Who owns it is kind of irrelevant. If we publish it
when they don't want it published, then next time they'll send it to
full-disclosure or, worse still, just share an exploit with the bad guys.
So unless we get really stoopid requests we try not to annoy people -
hole reporting is a voluntary activity.
Post by Marek Habersack
Post by Alan Cox
material that even the list isn't allowed to directly see only by
contacting the relevant bodies directly as well. The list then just
serves as a "foo should have told you about issue X" notification.
This sounds crazy. I understand that this may happen with proprietary
software, or software that is made/supported by a company but otherwise open
source (like OpenOffice, for instance), but the kernel?
It's not uncommon. Not all security bodies (especially government
security agencies) trust vendor-sec directly, only some members on the
basis of their own private auditing/background checks.

Alan
Marek Habersack
2005-01-13 20:44:15 UTC
Permalink
Post by Alan Cox
Post by Marek Habersack
Post by Alan Cox
We cannot do this without the reporter's permission. Often we get
I think I don't understand that. A reporter doesn't "own" the bug - not the
copyright, not the code, so how come they can own the fix/report?
They own the report. Who owns it is kind of irrelevant. If we publish it
when they don't want it published, then next time they'll send it to
full-disclosure or, worse still, just share an exploit with the bad guys.
So unless we get really stoopid requests we try not to annoy people -
hole reporting is a voluntary activity.
Sounds a bit backwards to me. It's like surrendering to a guy who attacks you
on the street "because he's got a knife and I don't". There is some sense in
it, but that way you're putting yourself in the position of a victim. The
reporters... ok, they own the report, but do they own the information?
Post by Alan Cox
Post by Marek Habersack
Post by Alan Cox
material that even the list isn't allowed to directly see only by
contacting the relevant bodies directly as well. The list then just
serves as a "foo should have told you about issue X" notification.
This sounds crazy. I understand that this may happen with proprietary
software, or software that is made/supported by a company but otherwise open
source (like OpenOffice, for instance), but the kernel?
It's not uncommon. Not all security bodies (especially government
security agencies) trust vendor-sec directly, only some members on the
basis of their own private auditing/background checks.
So it sounds like we, the people in the crowd, the ones affected most by
these issues, are really left out. The vendors (playing devil's advocate
here) are not affected by the bugs, since they fix them for their own
machines as they appear, way before the issues become public.

best regards,

marek
Marcelo Tosatti
2005-01-13 17:22:31 UTC
Permalink
Post by Alan Cox
Post by Marcelo Tosatti
The kernel security list must be higher in the hierarchy than vendorsec.
Any information sent to vendorsec must be sent immediately to the kernel
security list and discussed there.
We cannot do this without the reporter's permission. Often we get
material that even the list isn't allowed to directly see only by
contacting the relevant bodies directly as well. The list then just
serves as a "foo should have told you about issue X" notification.
Well, the reporters, and vendorsec, have to be aware that the
"kernel security list" is the main discussion point for kernel security issues.

If the embargo period is reasonable for vendors to prepare their updates and
do the necessary QA, there should be no need for kernel issues to be coordinated
(and embargoed) on vendorsec anymore. Does that make sense?
Of course vendorsec gets informed of what is happening at "kernel security list".

The main reason for reporters to require "permission" to spread the information
is that they want to make PR out of their discovery, yes?

In that case they should be aware that submitting to vendorsec means submitting
to kernel security, and that means X days of embargo period.
Post by Alan Cox
If you are setting up the list also make sure its entirely encrypted
after the previous sniffing incident.
Definitely, I asked Chris about it...
Rik van Riel
2005-01-13 03:37:55 UTC
Permalink
Post by Marcelo Tosatti
The only reason for this is to have "time for the vendors to catch up",
which can be defined by the kernel security office. Nothing more - no
vendor politics involved.
There are other good reasons, too. One could be:

"Lets not make this security bug public on christmas eve,
because many system administrators won't get around to
applying patches, while the script kiddies have lots of
time over their christmas holidays."

IMHO it would be good to coordinate things like this, based on
common sense, trying to minimise the impact on users of
the software. I do agree with Linus' "no politics" point,
though ;)
--
"Debugging is twice as hard as writing the code in the first place.
Therefore, if you write the code as cleverly as possible, you are,
by definition, not smart enough to debug it." - Brian W. Kernighan
Marcelo Tosatti
2005-01-12 15:06:12 UTC
Permalink
Hi Chris!
Post by Chris Wright
This same discussion is taking place in a few forums. Are you opposed to
creating a security contact point for the kernel for people to contact
with potential security issues? This is standard operating procedure
for many projects and complies with RFPolicy.
http://www.wiretrip.net/rfp/policy.html
Right now most things come in via 1) lkml, 2) maintainers, 3) vendor-sec.
It would be nice to have a more centralized place for all of this
information to help track it, make sure things don't fall through
the cracks, and make sure of timely fix and disclosure.
I very much like the idea, and I also think an "official" list of kernel security
issues and their respective fixes is very much required, since not every Linux
distribution can be expected to have kernel developers working for them, going
through whole changelogs looking for security issues, which is just silly.

Disclosing and bookkeeping of security issues is a job of the Linux kernel team.

Alan used to list the security fixes between each v2.2 release; v2.4 has never
had such an official list (I'm trying to put CAN numbers in the changelogs lately),
and neither has v2.6. It's not a practical thing for Linus/Andrew to do; it's a
lot of work.
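Marcelo's idea of tagging changelog entries with CAN numbers lends itself to mechanical triage. A minimal sketch, assuming fixes carry CAN identifiers as he describes; the changelog file and its entries below are invented for illustration:

```shell
# Invented example changelog; real ones live on kernel.org.
cat > ChangeLog-example <<'EOF'
[PATCH] fix local DoS in sys_foo (CAN-2005-0001)
[PATCH] cleanup: whitespace in drivers/net
[PATCH] harden bounds check in sys_bar (CAN-2005-0002)
EOF

# Pull out just the tagged security identifiers, the way a
# distribution without kernel developers on staff might triage:
grep -E -o 'CAN-[0-9]{4}-[0-9]+' ChangeLog-example
```

Of course, this only works if contributors actually tag their fixes, which is exactly why such an initiative needs to be known by all contributors.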

It would be good to have all developers know about such an initiative
and have them send in their security fixes to be logged and disclosed - it's
obviously impossible for you to read all changes in the kernel. And to have
Linus/Andrew advocate in favour of it.

IMO such an initiative needs to be known by all contributors for
it to be effective.
Post by Chris Wright
In addition, I think it's worth considering keeping the current stable
kernel version moving forward (point releases ala 2.6.x.y) for critical
(mostly security) bugs. If nothing else, I can provide a subset of -ac
patches that are only that.
Yes, -ac has been playing that role. It is the general consensus that
such point releases are required.

Linus doesn't do it because it is too much extra work for him (and he is focused
on other things); glad you have stepped up.
Post by Chris Wright
I volunteer to help with _all_ of the above. It's what I'm here for.
Use me, abuse me ;-)
You've been doing a lot of security work/auditing in the kernel for a long time,
which fits the job position nicely.

I'm willing to help.
Chris Wright
2005-01-12 18:49:36 UTC
Permalink
Post by Marcelo Tosatti
Post by Chris Wright
Right now most things come in via 1) lkml, 2) maintainers, 3) vendor-sec.
It would be nice to have a more centralized place for all of this
information to help track it, make sure things don't fall through
the cracks, and make sure of timely fix and disclosure.
I very much like the idea, and I also think an "official" list of kernel security
issues and their respective fixes is very much required, since not every Linux
distribution can be expected to have kernel developers working for them, going
through whole changelogs looking for security issues, which is just silly.
Disclosing and bookkeeping of security issues is a job of the Linux kernel team.
Yes, I agree.
Post by Marcelo Tosatti
Alan used to list the security fixes between each v2.2 release; v2.4 has never
had such an official list (I'm trying to put CAN numbers in the changelogs lately),
and neither has v2.6. It's not a practical thing for Linus/Andrew to do; it's a
lot of work.
It would be good to have all developers know about such an initiative
and have them send in their security fixes to be logged and disclosed - it's
obviously impossible for you to read all changes in the kernel. And to have
Linus/Andrew advocate in favour of it.
IMO such an initiative needs to be known by all contributors for
it to be effective.
Indeed, it would be most effective as a collective effort. Of course,
we'll never reach 100%, but we could do better than we do now.
Post by Marcelo Tosatti
Post by Chris Wright
In addition, I think it's worth considering keeping the current stable
kernel version moving forward (point releases ala 2.6.x.y) for critical
(mostly security) bugs. If nothing else, I can provide a subset of -ac
patches that are only that.
Yes, -ac has been playing that role. It is the general consensus that
such point releases are required.
Linus doesn't do it because it is too much extra work for him (and he is focused
on other things); glad you have stepped up.
Post by Chris Wright
I volunteer to help with _all_ of the above. It's what I'm here for.
Use me, abuse me ;-)
You've been doing a lot of security work/auditing in the kernel for a long time,
which fits the job position nicely.
I'm willing to help.
Great, thanks!
-chris
--
Linux Security Modules http://lsm.immunix.org http://lsm.bkbits.net
Florian Weimer
2005-01-12 19:43:07 UTC
Permalink
Post by Chris Wright
This same discussion is taking place in a few forums. Are you opposed to
creating a security contact point for the kernel for people to contact
with potential security issues?
Would this be anything but a secretary in front of vendor-sec?
Post by Chris Wright
http://www.wiretrip.net/rfp/policy.html
Right now most things come in via 1) lkml, 2) maintainers, 3) vendor-sec.
It would be nice to have a more centralized place for all of this
information to help track it, make sure things don't fall through
the cracks, and make sure of timely fix and disclosure.
You mean, like issuing *security* *advisories*? *gasp*

I think this is an absolute must (and we are certainly not alone!),
but this project does not depend on the way the initial
contact is handled.
Post by Chris Wright
+ If it is a security bug, please copy the Security Contact listed
+in the MAINTAINERS file. They can help coordinate bugfix and disclosure.
If this is about delayed disclosure, a few more details are required,
IMHO. Otherwise, submitters will continue to use their
well-established channels. Most people hesitate before posting stuff
they view as sensitive to a mailing list.
Chris Wright
2005-01-12 22:46:09 UTC
Permalink
Post by Florian Weimer
Post by Chris Wright
This same discussion is taking place in a few forums. Are you opposed to
creating a security contact point for the kernel for people to contact
with potential security issues?
Would this be anything but a secretary in front of vendor-sec?
Yes, it'd be the primary contact for handling kernel security issues.
Handling vendor coordination is only one piece of handling a security
issue.
Post by Florian Weimer
Post by Chris Wright
http://www.wiretrip.net/rfp/policy.html
Right now most things come in via 1) lkml, 2) maintainers, 3) vendor-sec.
It would be nice to have a more centralized place for all of this
information to help track it, make sure things don't fall through
the cracks, and make sure of timely fix and disclosure.
You mean, like issuing *security* *advisories*? *gasp*
Yes, although we're not even tracking things well yet, let alone issuing advisories.
Post by Florian Weimer
I think this is an absolute must (and we are certainly not alone!),
but this project does not depend on the way the initial
contact is handled.
Post by Chris Wright
+ If it is a security bug, please copy the Security Contact listed
+in the MAINTAINERS file. They can help coordinate bugfix and disclosure.
If this is about delayed disclosure, a few more details are required,
IMHO. Otherwise, submitters will continue to use their
well-established channels. Most people hesitate before posting stuff
they view as sensitive to a mailing list.
Yes, that's the point of coordinating the fix _and_ the disclosure.

thanks,
-chris
--
Linux Security Modules http://lsm.immunix.org http://lsm.bkbits.net
Hubert Tonneau
2005-01-12 20:49:55 UTC
Permalink
Where is 2.6.10.1 with the security fix only?

I have not yet finished dealing with the TCP troubles that moving to 2.6.10
generated on my production server, and now I am supposed to apply another large
set of mostly untested patches just to plug the security hole. This simply cannot
be done in a few days, because in many organisations the new kernel has to spend
several days on secondary servers before reaching the main ones.

Now, assuming that I have other production servers still running older kernels,
I have no way to get the simple fix from kernel.org and backport it to 2.6.8
and 2.6.9, unless I'm a full-time kernel worker who reads all messages on the
kernel mailing list.

Basically, you are currently leaving users not tied to a distribution out in the
cold, and this is really, really bad for the confidence we have in Linux,
so please publish a 2.6.10.1 with the short-term solution that fixes the hole.
Of course this does not prevent publishing a 2.6.10.2 when you find a better
solution, or using a different fix in 2.6.11, since those are not based on 2.6.10.1.

Regards,
Hubert Tonneau


PS: I believe that it would also be a very good idea, since Linux is now
expected to be a mature organisation, to automatically publish a 2.6.x.y
holes-only fix patch for each stable kernel released less than a year ago.
This would enable smoother upgrades of highly important production servers.
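The point-release model Hubert describes is easy to illustrate. The sketch below uses a toy one-file tree so it runs anywhere; the version numbers, file name, and patch contents are hypothetical, but a real patch-2.6.10.1 from kernel.org would be applied the same way:

```shell
# Stand-in for an unpacked linux-2.6.10 source tree:
mkdir -p linux-2.6.10
printf 'vulnerable\n' > linux-2.6.10/af_netlink.c

# A hypothetical 2.6.10.1 point release ships only the security delta:
cat > patch-2.6.10.1 <<'EOF'
--- a/af_netlink.c
+++ b/af_netlink.c
@@ -1 +1 @@
-vulnerable
+fixed
EOF

# Administrators apply the small, reviewed fix instead of moving the
# whole server to a new, largely untested kernel:
patch -d linux-2.6.10 -p1 < patch-2.6.10.1
```

Because each hypothetical patch-2.6.10.y would be generated against plain 2.6.10, a later 2.6.10.2, or a different fix in 2.6.11, would not disturb servers that already applied 2.6.10.1.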
Chris Wright
2005-01-13 17:29:51 UTC
Permalink
Post by Hubert Tonneau
Basically, you are currently leaving users not tied to a distribution out in the
cold, and this is really, really bad for the confidence we have in Linux,
so please publish a 2.6.10.1 with the short-term solution that fixes the hole.
Of course this does not prevent publishing a 2.6.10.2 when you find a better
solution, or using a different fix in 2.6.11, since those are not based on 2.6.10.1.
I agree (it was part of my original mail), and would like to remedy this.
For now, you can pick up fixes from the -ac tree.
Post by Hubert Tonneau
Regards,
Hubert Tonneau
PS: I believe that it would also be a very good idea, since Linux is now
expected to be a mature organisation, to automatically publish a 2.6.x.y
holes-only fix patch for each stable kernel released less than a year ago.
This would enable smoother upgrades of highly important production servers.
Not sure about that (it's quite some work), but at least the _current_
stable release version.

thanks,
-chris
--
Linux Security Modules http://lsm.immunix.org http://lsm.bkbits.net