the normalization operation. The execution times
were catastrophic. A simple function independently manipulating
20 pointer variables within a loop took more than
15 minutes to analyze. The execution time did not change
at all when we tried Johnson’s algorithm.
After a careful inspection of the results it appeared that
the system of inequalities was always dense, i.e. all variables
were related. Therefore the cubic worst case execution time
was always attained. The reason was to be found in the
way simple range constraints of the form a ≤ x ≤ b are
represented. A DBM always contains a dummy zero variable
Z which has the value 0. Range constraints are translated
into constraints of the form a ≤ x − Z ≤ b. Therefore all
variables introduced in a DBM during the analysis become
implicitly related as soon as a range constraint is involved,
which is to say always. Thus completely independent variables
become related from the moment they are assigned a constant
(during initialization for example). This was a surprising
and disappointing result.
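To see why the zero variable makes every DBM dense, consider a minimal Python sketch (purely illustrative; the class and names are our own, not the analyzer's code). Two variables that the program never compares still become related through Z once each receives a range constraint, so the cubic normalization always runs on a fully connected matrix:

```python
import itertools

INF = float("inf")

class DBM:
    """Difference-bound matrix over variables plus a dummy zero variable Z.
    m[i][j] = c encodes the constraint v_i - v_j <= c."""
    def __init__(self, names):
        self.names = ["Z"] + list(names)          # index 0 is the zero variable
        n = len(self.names)
        self.m = [[0 if i == j else INF for j in range(n)] for i in range(n)]

    def idx(self, v):
        return self.names.index(v)

    def add_range(self, v, lo, hi):
        """Encode the range constraint lo <= v <= hi via the zero variable."""
        i, z = self.idx(v), 0
        self.m[i][z] = min(self.m[i][z], hi)      # v - Z <= hi
        self.m[z][i] = min(self.m[z][i], -lo)     # Z - v <= -lo, i.e. v >= lo

    def normalize(self):
        """Floyd-Warshall closure: cubic in the TOTAL number of variables."""
        n = len(self.names)
        for k, i, j in itertools.product(range(n), repeat=3):
            self.m[i][j] = min(self.m[i][j], self.m[i][k] + self.m[k][j])

d = DBM(["x", "y"])        # x and y are never compared by the program
d.add_range("x", 1, 5)
d.add_range("y", 2, 8)
d.normalize()
# The closure now carries a finite bound relating x and y, obtained through Z:
# x - y <= (x - Z) + (Z - y) = 5 + (-2) = 3, although x and y are independent.
```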
Our response to this situation was to explicitly pack computationally
dependent variables together, so that the analyzer
works on a collection of smaller DBMs. A similar situation
has been independently reported in [3]. In that work
the authors pack variables into small groups using a syntactic
criterion (all variables that appear within the same statement).
In our case, such a simple criterion does not work.
Pointer variables and loop counters can become related in
a nontrivial way via the sliding window representation. We
could not even use a dependency analysis because the application
of the slide operation depends on the range of uk,
which can only be known during the fixpoint iteration. Any
dependency analysis performed beforehand would relate all
variables of the sliding windows which would still lead to a
high workload.
Our solution consisted of dynamically computing the dependency
relation between metavariables during the execution
of the analysis. We start with all metavariables being
unrelated and we incrementally merge the DBMs whenever
two of their variables become related by an operation of the
program. We also merge the associated zero variables. We
must also take care of implicit dependencies, i.e., the invisible
dependencies between variables that are modified
within a loop. If we do not consider these relations, we lose,
for example, all relations between array indices and loop counters.
Therefore we first perform a rapid analysis of every
loop to identify the variables that may be modified in
its body, and we explicitly relate them before analyzing the
loop. We are then able to infer all invariants that can be
expressed with our abstraction. The function that took 15
minutes with the classic DBM domain could now be analyzed
in about 10 seconds.
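The dynamic packing described above can be sketched with a union-find structure over variables, where each root stands for one small DBM. This is a hypothetical illustration of the bookkeeping only (the DBM contents and the merging of zero variables are elided), with variable names of our own choosing:

```python
class PackManager:
    """Union-find over variables; each root owns one pack, standing in
    for a small DBM in the real analyzer."""
    def __init__(self):
        self.parent = {}
        self.pack = {}          # root -> set of variables in its DBM

    def add(self, v):
        if v not in self.parent:
            self.parent[v] = v
            self.pack[v] = {v}

    def find(self, v):
        while self.parent[v] != v:
            self.parent[v] = self.parent[self.parent[v]]  # path halving
            v = self.parent[v]
        return v

    def relate(self, a, b):
        """Called whenever a program operation relates a and b: merge their
        packs (and, in the real analyzer, their DBMs and zero variables)."""
        self.add(a); self.add(b)
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[rb] = ra
            self.pack[ra] |= self.pack.pop(rb)

def prepare_loop(pm, modified_vars):
    """Pre-pass: conservatively relate all variables modified in a loop body,
    so implicit dependencies (e.g. index vs. counter) are not lost."""
    vs = list(modified_vars)
    for v in vs[1:]:
        pm.relate(vs[0], v)

pm = PackManager()
pm.relate("p", "n")              # e.g. p = base + n relates pointer and counter
prepare_loop(pm, ["i", "a_idx"]) # i and a_idx are both modified in some loop
```

Starting from all metavariables unrelated, merges are applied incrementally during the fixpoint iteration, so the analyzer only ever normalizes the small DBMs that the program actually correlates.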
The domain of adaptive DBMs that we have constructed
in this way is an order of magnitude more complex than
the original one. Fortunately, it can be simply described
as an instance of a cofibered domain [27, 28]. Cofibered
domains were initially introduced to construct complex domains
for pointer analysis. They enable the manipulation
of dependent abstract domains, i.e. families of abstract domains
indexed by the elements of a lattice. The domain of
adaptive DBMs is exactly a cofibered domain: the indexing
lattice is the set of all partitionings of the set of variables
ordered by the refinement relation, and the abstract domain
associated with a given partitioning of the variables is the product
of the family of DBM domains, one per block of
the partitioning. We measured that a block of
correlated variables contains five variables on average. It would
actually be an interesting experiment to use convex polyhedra
instead of DBMs in the cofibered domain, since five is a
tractable dimension for polyhedra, and to compare the gain in
precision.
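The indexing lattice can be made concrete with a small sketch (our own, with hypothetical variable names): partitionings are encoded as sets of blocks, and the refinement relation that orders the lattice checks that every block of the finer partitioning is contained in a block of the coarser one:

```python
def refines(p, q):
    """Partitioning p refines q iff every block of p is contained in some
    block of q. Partitionings are sets of frozensets of variable names."""
    return all(any(b <= c for c in q) for b in p)

# The analysis starts from the finest partitioning (all variables unrelated)
# and moves toward coarser ones as packs are merged.
fine   = {frozenset({"x"}), frozenset({"y"}), frozenset({"z"})}
coarse = {frozenset({"x", "y"}), frozenset({"z"})}
```

In the cofibered domain, the abstract element attached to a partitioning is the product of one DBM per block, so moving to a coarser index point corresponds exactly to merging DBMs.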
3.3 Interprocedural Propagation
Function pointers are widely used in embedded programs
for efficiency reasons. There are plenty of them in the codes of
the MPF family. We realized that a simple control-flow analysis
based on Steensgaard's algorithm [24] was sufficient to
resolve almost all computed calls exactly. As a matter of fact,
recent experimental evaluations showed that simple pointer
analyses were sufficient to resolve computed calls in most
applications [19]. We perform this simple control-flow analysis
at bootstrap, prior to launching the interprocedural
propagation phases. Having all computed calls resolved at
 