Re: PD-0101: Level of Detail Necessary for Assurance Requirements on Third Party Products
My mail client believes that Daniel P. Faigin wrote the following:
> On Wed, 10 Mar 2004 16:09:37 -0500 (EST), Magosányi Árpád <email@example.com> said:
> > -Everything off the TOE can be off the radar, as we will check it to the
> > necessary extent in the evaluation tests indirectly. The previous
> > statement is of course false: it is yet to be decided what assurance
> > measures are to be used against these innocent entities, and in what
> > circumstances. I would recommend frightening 'em with configuration
> > management, and kill 'em all with ATE_IND as required for the delivered
> > TOE in a relatively low EAL (3, maybe).
> I'm not quite sure what you are saying. As the CC stands currently, if it is
> in the TOE or TSF (depending on the assurance requirement), it is subject to
> the requirement. No exceptions. Note that some components may not be visible
> in the design depending on the EAL (such as screws for the motherboard on a
> database), and some don't provide SFRs.
I am talking here about things outside the TOE, for which the CC does
not define the required assurance, or defines it only very vaguely.
> > -Everything which cannot move can be off the radar. I believe that this
> > can be measured by the participation in the machine state space, and
> > be given in the number of bits.
> Huh? Can you translate this for me?
Here I am talking about things inside the TOE which are too small to be noticed
(like the screws). I try to define the notion of "too small" by their
participation in the machine state space, which can be measured in bits.
Sorry for my poor English.
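To make it a bit more concrete, here is a toy sketch in Python (the components
and the threshold below are made up, just to illustrate the idea):

    import math

    # Made-up components of a TOE and the number of distinguishable states
    # each contributes to the machine. A screw contributes a single state
    # visible to the TSF; a 32-bit configuration register contributes 2^32.
    components = {
        "mounting screw": 1,
        "mode jumper": 2,
        "config register": 2 ** 32,
    }

    # "Participation in the machine state space", measured in bits.
    THRESHOLD_BITS = 1  # made-up cut-off below which a part is "too small"

    for name, states in components.items():
        bits = math.log2(states)
        verdict = "below the radar" if bits < THRESHOLD_BITS else "worth noticing"
        print(f"{name}: {bits:.0f} bits -> {verdict}")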
> >> And, of course, this raises the question of what really is the implementation
> >> representation. Is it the source code, in
> >> pick-your-favorite-high-level-language, or is it the output of the compiler?
> > Sorry, I thought that for software it is the source code. The whole open
> > source community is built around this fact. They would be very disappointed
> > to figure out otherwise:) Open objects community? It sounds like a trade
> > union of thieves, not like the community of the brightest minds of the IT
> > industry.
> Actually, the only reason that we view source as the implementation is that it is
> what we (as software engineers) can read. Would you be able to do an analysis
> of the assembly language? I probably couldn't. I couldn't do it on the raw
> machine code either.
> What this boils down to is that our implementation representation has a number
> of levels to it, and we don't acknowledge that in the CC. There is source,
> assembly, and machine code/executable. Of course, that's just software..... is
> the question easier or harder for hardware (actually, for hardware, the
> implementation representation is easier, it is the HLD/LLD that is harder).
I think that the source is the level we look at not just because it is what
we can understand, but also because it is the TOE. Its correspondence
to RTL, assembly, or object code is outside the question of whether the
TOE itself is good enough: it belongs to the question of whether the TOE
embedded in its environment is good enough.
This is not just the politically correct and compact approach, it is also
the approach which shows you the way out: you do need assurance for
components outside the TOE, but you should choose the measures for them correctly.
> Think this is a silly question? Think again. We're seeing an increasing number
> of products that are being written in Java, Python, and Perl. All languages
> that are translated to a portable representation and interpreted. What level
> is the implementation representation?
Think of a pure interpreter implementation of a language. You have the
source as the implementation representation, and the interpreter as the
underlying abstract machine.
Now think of the compiler implementation of the same language. It is the same
program. Having it interpreted or compiled is just a technical detail
which should not be visible at the conceptual level: it should not modify
our notion of the TOE boundary.
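A trivial Python illustration of this (the snippet is mine, nothing official):
the same source text can be handed to the abstract machine directly or compiled
to a lower representation first, and the observable behaviour is identical either
way, so the boundary should not move.

    # The same source text, executed two ways.
    source = "result = sum(range(10))"

    # "Pure interpreter" style: hand the source text to the abstract machine.
    env_interpreted = {}
    exec(source, env_interpreted)

    # "Compiler" style: translate to a lower representation first, then run it.
    code_object = compile(source, "<toe>", "exec")
    env_compiled = {}
    exec(code_object, env_compiled)

    assert env_interpreted["result"] == env_compiled["result"] == 45
    print("same program, same behaviour:", env_interpreted["result"])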
> And think about this: IF the implementation representation is the source code
> (C++, C, Perl, Python, Java, Fortran, Ada, Cobol, PL/I, Pascal, Snobol, Lisp,
> Algol 68, etc.), then do we care about which machine the certificate covers?
> Is testing on one machine for which there is a compiler for that language
> equivalent to testing on them all, if the requisite libraries are there? If
> not, why not?
This is what the CCIMB calls a "configuration management nightmare", but
I call it the real world. In the real world you will not just use the
product in a different configuration than the one it is certified in, but also use
security functions of it which are not certified. This is partly because
the CC cannot yet handle the genetic variety of real life:)
The easiest part of this problem to tackle is the variety where the
expected result is the same: different compilers, libraries, and processor
architectures which we expect to behave similarly. Just test the TOE
when it is embedded in its actual environment, and you will see whether
it is good enough. This is why I think that ATE_IND.3 should appear
even at relatively low EALs.
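As a rough sketch of what I mean by testing it where it actually runs (the test
vector is just the well-known SHA-256 of the empty string, and the script is only
an illustration, not part of any evaluation):

    import hashlib
    import platform
    import sys

    # Re-run a known test vector on whatever compiler/library/architecture
    # the product is really deployed on, instead of trusting only the single
    # configuration it was certified in.
    EXPECTED = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"

    def independent_test() -> bool:
        env = f"{platform.system()} / {platform.machine()} / Python {sys.version.split()[0]}"
        actual = hashlib.sha256(b"").hexdigest()
        ok = (actual == EXPECTED)
        print(f"{env}: {'PASS' if ok else 'FAIL'}")
        return ok

    if __name__ == "__main__":
        sys.exit(0 if independent_test() else 1)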
The hardest part needs actual brain power: think about certain classes
of TOEs as tools providing security functions for other TOEs, design
and evaluate them accordingly, and don't leave some of their features
out of the evaluation just because it would cost too much money and the
managers would not notice that only a quarter of the actual TOE is
actually evaluated.
> > Neglecting development tools is just as sinful as neglecting libraries,
> > but I hereby excuse you from the prosecution on the ground of lack of
> > noticeable danger to the society:) Flaws found in evaluated products can
> > very rarely be traced back to this cause, I guess.
> Here you hit upon a key notion, that is not reflected well in the CC: Risk
> Assessment. We don't look at compilers because flaws can rarely be traced back
> to that cause. Similarly, we don't look at screws, or RAM chips, or logic
> gates, or buffer chips. We do look at CPUs.
> Back in the TCSEC days (after all, this is cc-cmt), the NCSC could do this
> risk assessment, and tell evaluators something was below the level of
> concern. In today's international environment, a scheme cannot make that
> determination, because it might have impacts on MR or economic impacts on
> their labs. Such direction MUST come from the CC Project.
I think that risk management is actually done, but it is done in an
instinctive way: we don't expect many flaws from compilers or libraries,
so we pretend they do not exist. It works to an extent, but it is not a
well-scalable approach. For example, you are okay with libraries
in operating system and hardware TOEs. But to certify a database
engine, you have to do it on _one_ hardware architecture and operating
system, which is nonsense, and treats end users with utter contempt.
It would be better to look at risks, judge them, and leave some of them
to be considered by the scheme or even by the end user. Just do it in a manner
understandable by a manager, which is hard, because managers might have
a slight notion of what an EAL means, but even CC "experts" tend to
forget the fact that evaluation assurance is just evaluation assurance,
and has nothing to do with the actual security the product provides.
(There are scheme sites which present the EPL mentioning only the EAL the
product received. It is a HORROR.)
I don't know how one could handle the communication side of adding
risk management measures to this mix.
Our in-house scheme made it easy for the managers: they very indirectly
decide on the PP _and_ the EAL at an early stage (they think they have
just categorized their data), and all they are provided with later is
the input to make risk management decisions, simplified down to their
level. But this approach does not seem to be usable for choosing
products, which seems to be the main area of interest for CC today:(
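Purely as an invented illustration of that kind of scheme (the categories, PP
names and EALs below are made up, not our actual tables):

    # When a manager categorizes the data, the protection profile and the EAL
    # fall out of the categorization; the manager never reasons about them directly.
    CATEGORY_TO_REQUIREMENTS = {
        "public":       {"pp": "baseline-pp",   "eal": 1},
        "internal":     {"pp": "baseline-pp",   "eal": 2},
        "confidential": {"pp": "restricted-pp", "eal": 3},
        "secret":       {"pp": "restricted-pp", "eal": 4},
    }

    def requirements_for(category: str) -> dict:
        return CATEGORY_TO_REQUIREMENTS[category.lower()]

    print(requirements_for("confidential"))   # {'pp': 'restricted-pp', 'eal': 3}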
GNU GPL: only from pure source