RE: FPT_SEP in an Application TOE
- Subject: RE: FPT_SEP in an Application TOE
- From: "Arnold, James L. Jr." <JAMES.L.ARNOLD.JR@saic.com>
- Date: Mon, 11 Oct 2004 08:22:02 -0400
- Content-Type: text/plain
> I've missed the last couple of weeks of heated discussion on
> this mailing list and got it all at once. It seems that there
> are several active threads that all revolve around what is
> clearly one of the weak points of the CC:
> composition. This has also been a popular topic at the Berlin
> ICCC. I believe the presentations given at the conference are
> intended to be available on the Web in the near future.
> The issue boils down to the question: how do you evaluate a
> component TOE and claim security properties for it even when
> it has dependencies on its environment?
The evaluation of such products is not new, as there are numerous evaluated
products that depend on their environments - either via a dependency from a
TOE SFR to an IT environment SFR or, in some cases, even less directly via
TSS statements and assumptions.
I started this chain simply because of an apparent recent change in the
handling of this situation. That change involves suggesting either explicitly
restating a TOE SFR to explain the support provided by the IT environment
or dropping the claim. In my case, as I have presented a few times in this
chain already, the underlying OS is bound by assumptions and IT environment
requirements to be restricted to allow only the application TOE to run and
to offer only TOE services externally (i.e., via the network).
Given these ST-imposed limitations, the claim that the TOE protects its
audit data was denied, even though the TOE offers controlled interfaces to
access audit records, because the underlying OS might somehow be corrupted
to allow tampering with the TOE audit trail (note that FPT_SEP had already
been dropped by that point, denied for essentially the same reason). I have
yet to see a reasonable rationale in light of the fact that restricting
access to the audit trail is not substantially different from restricting
access to TOE management functions (or data), and I am quite sure that
numerous products have been evaluated in the past under similar
circumstances with no difficulty.
> This discussion comes at a time when new versions of the CC
> are being crafted. A lot of this revision is being done
> behind closed doors, but some hints of what is to come can be
> gleaned from ASE/APE Trial Use v2.4, presentations at the
> ICCC and various publications. Most of the work is focused
> towards getting CCv3.0 out next year, but it's already clear
> that some things will have to wait for later versions.
> Unfortunately, it seems that an underlying framework for
> composition is one of those things that'll have to wait.
I am hoping that some of the v3 authors take this discussion into account,
but v3 isn't likely to be used for another year or so. Note that I'm
somewhat afraid of the 'behind closed doors' thing since I think some of the
people working on the rewrite are creating the problems in using the current
version of the CC, and it might be that work itself that is instigating the
recent changes I am seeing.
> I believe that the single most important element behind the
> sorry state of security engineering practice we see today,
> after over thirty years of research and experience, has been
> the lack of a generally accepted set of paradigms and
> terminology. It seems to me that the CC, having been adopted
> internationally, is the best bet we have towards getting
> there. We all know we haven't reached the promised land yet -
> even something as basic as the concept of 'threat' has
> changed dramatically between CC versions 2.2 and 2.4.
> I don't see this as shameful, but as a sign that we are
> finally making real progress.
> I tend to agree with Hal Forsberg that major changes in the
> way we apply the standards should be addressed in future
> versions. The CC are anything but static; theory becomes
> criteria in noticeable leaps and bounds. This means that
> interpretations necessarily deal with the current standard,
> v2.2. IMHO, we should tend to leniency when applying new
> understanding to existing standards, especially when it comes
> to puzzling out ambiguous or incoherent wording in a
> standard, which we all know is in a state of evolution.
I think I have to disagree in that what I started this chain with was a
question about a recent change in direction. The subsequent discussion has
turned perhaps a bit more esoteric, but I think what I am suggesting is more
or less an admission of much of the historical practice. The problem is that
while claims have been met by TOEs with obvious environment dependencies,
the powers that be have insisted that the TOE meets the requirements 100%. I
have tried to hold the same position all along, but I find that I am
constantly faced with new and different positions from my own scheme.
Regardless, I am hopeful that the next version of the CC will explicitly and
clearly address topics such as this, but at the same time I hope that the
next version doesn't deny valid evaluations for the sake of some minority of
cases.
> When it comes to discussing theory I would argue that it is
> more productive to consider the new frameworks rather than
> the old. Specifically, I understand some relevant changes to include:
> 1. FPT_SEP and RVM are to be taken out of Part 2 (see NIAP
> I-0382) and considered architectural assurances - this means
> that you won't be able NOT to claim that the TSF is protected
> from bypass, tampering, and/or interference (as some STs have done).
While most TOEs should address the concepts of tampering and bypass, I hope
the next version doesn't actually require it. There might be cases where it
is important to determine only that a mechanism works correctly, such as a
plug-in security module. If these architectural claims freely allow the
environment to address the concern, then perhaps it would not be a problem,
but I envision perfectly useful and meaningful cases where the TOE itself
plays little or no role in those functions beyond simply working properly.
> 2. The TSP is now clearly defined to be expressed by the set of SFRs.
> 3. SFR implementation analysis is separate from the
> architectural analysis (SEP and RVM). FSP is more explicitly
As indicated above, I hope we don't deny TOEs that simply offer a function
perhaps to be utilized in some composite.
> 4. Security requirements for the operational environment are
> no longer to be specified in the ST.
So how are dependencies to be addressed? There are numerous cases, but I am
most concerned with the case where the full (per SFRs) functionality appears
to be addressed at the TSFI, but the TOE relies on other components for some
measure of support.
> 5. Assumptions on the environment are bad for composition and
> I suspect that they will be discarded in a future CC version;
> objectives for the environment should be defined as
> countering threats, rather than upholding assumptions.
So basically we are changing assumptions to objectives. I am uncertain why
assumptions about the environment are bad, since I tend to think that the
concept of assumptions is known to most people (implying that they must be
true). Objectives on the environment I tend to think may not be so clearly
understood, but can equally specify things that must be true about the
environment. At the end of the day, the ST should be written to clearly
present things to someone who is not necessarily CC literate.
> 6. While we are arguing about what was meant by the specific
> wording of this or that CCv2.2 SFR (e.g. FIA_UAU.1), D.J. Out
> has indicated on this list that a Part 2 rewrite is in
> progress for CCv3.0. Perhaps we should hold off any
> interpretations recommending SFR changes until we get to see
> a draft and offer appropriate feedback.
Regardless of the terminology, the basic concepts will remain.
> The issue of the allocation of security requirements to the
> environment has been debated often on this list (e.g. see
> thread on PD-0091 from January 2004). Several PDs (e.g. 0004,
> 0046, 0053, 0099) require the underlying platform to be
> within the boundaries of the TOE, i.e. mandate 'monolithic'
> evaluations. This evidently feels unsatisfying to many people
> (myself included).
While some of the PDs do not seem justified, the scheme has taken the
position that the PP author gets to decide. Unfortunately, I don't know if
anything in the next version of the CC can take the decision of PP
conformance out of the author's hands. I would certainly prefer that PPs be
self-specifying, with no reason to consult an author - this most likely
means much more extensive PP evaluation requirements to make sure that the
PP is right in the first place.
> Since Franklin Haskell has shifted the discussion from Scheme
> policy to a discussion of composition theory in the context
> of the CC, I suggest we re-examine WHY we're where we are.
> Why has the evaluation-in-parts concept, as expressed by the
> TNI and TDI, not translated through into the CC?
I don't think they were forgotten when the CC was written. After all the CC
specifically supports the notion of IT environments and dependencies
thereto. However, in making decisions and interpretations over time, it
seems as though the U.S. scheme has subsequently forgotten those earlier
concepts.
> The CC defines 'TOE' (somewhat circularly) as what is being
> subject to evaluation. The environment is everything else -
> NOT subject to evaluation.
> Intuition and common sense dictate that the TOE be what is
> delivered to the consumer organization.
While one might argue that the TOE is what is delivered to the customer, and
that would certainly serve the customer best, the thing delivered to the
customer may have parts that were not subject to evaluation (for good
reasons). A good example is an operating system where the kernel, trusted
services, and admin (and security relevant user) tools have been evaluated,
but the host of apps included with the OS have not been evaluated because
they are outside the security perimeter. So, are those apps part of the TOE?
I think not, but the overall solution has been evaluated with the limitation
that a bunch of untrusted apps are included and should not be used with any
measure of privilege...
> * Where the TOE has NO dependencies on the IT environment
> (standalone TOE), the issue is moot.
As indicated above, it is not moot.
> * Where the TOE depends on the IT environment but has no
> security-dependency on the IT environment (i.e. the TSF is
> self-contained), there are no security objectives for the IT
> environment, and again, there is no issue with a partial TOE.
> An example could be a PKI that interacts with a directory
> server, but does not allocate any security objectives to the
> directory that are required to support the PKI TSP (the
> directory will typically have its own TSP). Of course, the
> interfaces with the IT environment should be described so
> that the evaluator can determine that they are not security-relevant.
> * Therefore, the issue pertains to a TOE whose TSF has
> security-dependencies on the IT environment.
But what if the security dependencies amount to 'restrict physical access to
the environment, configure the underlying environment to run only the TOE,
do not configure any untrusted user accounts, do not run untrusted software,
and use only an environment that you trust not to tamper with the TOE
itself'. Dependencies such as these would be common for an application TOE
such as a certificate server, IDS product, firewall, database server, etc.
Are these security objectives?
> The CC defines 'TSF' as all that must be relied upon for the
> correct enforcement of the TSP. For the sake of this
> argument, I shall define TSF* as the security-relevant parts
> of the IT environment.
Actually, it defines the TSF as all of the 'TOE' that must be relied
upon...this clearly leaves the IT environment and its support out of the
TSF.
> The U.S. Scheme has "solved" the problem by eliminating it:
> the boundary of the TOE is expanded until TSF* is empty. This
> almost always means that hardware (memory, CPU) must be
> included in the TOE.
I'm not sure they have really solved anything. They allow some dependencies
on the IT environment and not others, seemingly arbitrarily. I agree that
they certainly prefer evaluations with hardware, but even they realize that
is usually not practical.
> In the recent ICCC, I presented a proposed framework for
> composition that takes another tack: use common sense in
> defining the boundaries of the TOE, consider the IT
> environment as a subsystem, and include it INSIDE the
> boundaries of the evaluation. Invent a new name (we use Focus
> of Evaluation
> - FOE) for (TSF+TSF*).
Note that I think the current CC effectively treats the IT environment as a
subsystem insofar as the specification of the IT environment interface is in
the high-level design. The current CC would have an evaluator analyze the
specification of the IT environment interface, but not beyond that.
> If the evaluation sponsor does not wish to or cannot provide
> appropriate evidence for the TSF* at a level commensurate
> with the claimed EAL for the TOE, he must provide a balanced
> assurance argument why he believes there is ample grounds for
> confidence that the underlying IT environment (or any
> subsystem, for that matter) will support the TOE's
> objectives. This could be based on the fact that it has
> already been evaluated to meet appropriate requirements, or
> on arguments such as "the hardware is only
> security-supporting, it is a mass-produced chip, the vendor
> has a huge stake in making sure that the chip performs
> correctly because recalls are so costly, and besides, the
> chip vendor is never going to give us the evidence we need so
> we might as well trust him as we don't really have a choice
> in the matter."
> In any case, the TOE must be tested in conjunction with its
> IT environment.
I had always believed that this is where assumptions (about the environment)
and SFRs for the environment came in. One would evaluate the IT environment
interface specification and the associated assumptions and SFRs to make sure
they appear to make sense, and the TOE would necessarily be tested in an
environment that seemed to fulfill those things. Presumably, the underlying
product might already have been, might be in the process of being, or might
subsequently be evaluated in a manner that would support those assumptions
(which might include additional procedural restrictions, for example) and
SFRs.
> At that same conference, Ken Elliott presented current work
> on updating the ADV class. He proposes a leveled approach,
> which for any given subsystem requires (rough translation):
> EAL1 - nothing
> EAL2 - identification of the TSF, architecture,
> security-enforcing behavior of security-enforcing components,
> support for component designation
> EAL3 - more detail for SE, description of SS behavior
> EAL4 - modules, implementation representation
> EAL5 - SNI, semi-formal presentation, INT ..
This represents one of my fears about the next CC version: we seem to be
keeping the labels but changing the meanings. If anyone with a pen is
reading this, please consider changing the labels if the meanings are to be
changed. Rough mappings, such as when the CC EALs were originally published,
are OK, but it is not good to change the definition of an EAL and keep using
the same name. How about New Evaluation Assurance Levels, or EAL2K?
> Taking this to the IT environment, I believe that you might
> be able to get up to EAL 3 with standard documentation that
> usually accompanies operating systems, ICs, and the like.
Regardless, it should be possible to buy an evaluated operating system and
to install a trusted application (i.e., TOE) with no information beyond that
required in the evaluated config, admin, and user guides. So long as the
'evaluated configuration' is not violated, how will that situation be
accommodated? But, of course, the question might remain as to who protects
audit records, configuration data, and the TOE itself. From my perspective,
so long as the OS offers the necessary mechanisms and the TOE uses them
properly, I shouldn't have to have a lot of information about the underlying
OS at any assurance level even to speak to composed mechanisms.
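To make that last point concrete, here is a minimal sketch (my own
illustration, not from any particular evaluated product) of an application
TOE "using the mechanisms properly": it creates its audit trail with
owner-only permissions and funnels all writes through its own controlled
interface, delegating enforcement to the OS's file access control. On a
POSIX system the resulting file mode is 0o600.

```python
import os
import stat
import tempfile

def create_audit_trail(path):
    """Create an audit log that only the TOE's own user account can
    read or write, relying on the underlying OS's file access control.
    O_EXCL ensures an existing file is never silently reused."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o600)
    return os.fdopen(fd, "w")

def append_record(log, record):
    """The TOE's controlled interface: all audit writes go through here."""
    log.write(record + "\n")
    log.flush()

if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as d:
        log_path = os.path.join(d, "audit.log")
        with create_audit_trail(log_path) as log:
            append_record(log, "AUDIT: TSF startup")
        # Owner read/write only; group and other get no access.
        print(oct(stat.S_IMODE(os.stat(log_path).st_mode)))
```

Nothing here requires knowledge of the OS internals - only that the
documented access-control mechanism behaves as specified, which is exactly
the kind of property an IT environment SFR or assumption would state.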
However, I don't want evaluations to be unnecessarily tied to the schedules
of other evaluations. As such, it should remain possible (as it is today) to
specify things about the environment such that if they are true (e.g., it
has been evaluated against a particular OS PP and offers specific mechanisms
such as file access control, memory protection, and process protection) the
overall solution is acceptable. Again, I shouldn't have to know a lot about
the underlying implementation, I need only know how to use the applicable
mechanisms properly. As to when the underlying product is evaluated, that is
a tough question - if already evaluated no problem. If currently being
evaluated, the situation is risky since the product might change. If there
is no evaluation in sight, even more risky since the composite may never be
completely evaluated. In each case, the underlying platform would certainly
need to be identified and used for testing purposes. Restricting an
application TOE to an evaluated platform is probably not too severe a
restriction, especially if assurance continuity could be used to migrate to
newer versions of the platform as they become available and evaluated.