However, we have already observed that this may lead to an exponential explosion in the size of the problem (see Example 1.). As a matter of fact, Example 1.
We say that S1 and S2 are equivalent if the following two conditions hold: (1) the projection on the X-variables of every solution of S2 is a solution of S1; and (2) every solution of S1 can be extended to a solution of S2. So, when S1 and S2 are equivalent, the solution set of S1 is exactly the projection on B n of the solution set of S2. In particular, if S2 only involves the X-variables, then S1 and S2 are equivalent if and only if they have the same solution set.
We are now ready for the main result of this section (Tseitin []; see also [78] for a broader discussion and for extensions of this result to first-order predicate logic): every Boolean system can be reduced in linear time to an equivalent DNF equation. First, use Theorem 2.; then, apply the procedure Expand described in Section 1.
Actually, we do not need the full power of the procedure Expand in order to establish Theorem 2. The next result underlines the special role played by DNF equations of degree 3; it can be seen as a strengthening of the second half of Theorem 2. Consider a DNF equation of the form 2., and let y be an additional Boolean variable, different from x1, x2, . . . , xn.
Thus, the two equations are equivalent, and repeatedly applying this reduction eventually yields a DNF equation of degree at most 3. We leave to the reader a more complete analysis of the complexity of this procedure.
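To make the reduction concrete, here is a small Python sketch of our own (it is not taken from the original text). A DNF equation φ = 0 is encoded as a set of terms, each term being a frozenset of (variable, sign) pairs, with sign True for an uncomplemented literal; this encoding is reused in the later sketches, and the auxiliary variable names y0, y1, . . . are hypothetical.

```python
import itertools

_fresh = itertools.count()  # supplies indices for the auxiliary variables y0, y1, ...

def reduce_to_degree_3(terms):
    """Reduce the DNF equation OR(terms) == 0 to an equivalent equation of
    degree at most 3.  A term x1 x2 C with |C| >= 2 is replaced by the two
    terms x1 x2 y-bar and y C, where y is a fresh variable; this preserves
    consistency, and repeating the step shortens long terms one literal at
    a time."""
    work, out = list(terms), set()
    while work:
        t = work.pop()
        if len(t) <= 3:
            out.add(t)
            continue
        lits = sorted(t)                       # pick two literals in a fixed order
        x1, x2, rest = lits[0], lits[1], frozenset(lits[2:])
        y = 'y%d' % next(_fresh)
        out.add(frozenset({x1, x2, (y, False)}))   # the degree-3 term x1 x2 y-bar
        work.append(rest | {(y, True)})            # the term y C, possibly still long
    return out
```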
Relying on Theorem 2., every Boolean equation could in principle be transformed into a DNF equation before being solved. The transformation, however, introduces additional variables and terms; hence, in some cases, it may artificially increase the difficulty of the problem at hand. Since some Boolean equations naturally arise in non-DNF form, this question is of practical relevance; a more efficient approach, however, can be used here.
If it is consistent, then we can stop right away. It will also be easy to see that some of the equation-solving techniques presented in the following sections extend in a straightforward way to this more general setting. Other techniques have been generalized in a more sophisticated way with the same goal in mind, for example, the consensus technique (see Thayse [], Van Gelder []) or local search heuristics (see Stachniak []). It is worthwhile to briefly clarify this issue before proceeding. Any algorithm for the solution of Boolean equations should be able, as a minimal requirement, to give an answer to this decision problem.
Now, any algorithm that tests the consistency of an equation can also be used, in principle, to compute a solution when there is one. To understand this, consider any such consistency-testing algorithm, say A.
First, run A on the given equation; if the answer is No, then we can stop. Otherwise, fix xn to 0 and run A on the resulting equation. If the answer is Yes, then we keep xn fixed at 0; if the answer is No, then we know that xn must be 1 in all solutions, and, accordingly, we fix xn to 1 in the equation. Repeating this step for each variable in turn eventually produces a solution. Fortunately, this roundabout way of computing solutions will usually not prove necessary. Indeed, it is difficult to imagine an algorithm that would simply test whether an equation is consistent without also, implicitly or explicitly, finding a solution of the equation when there is one.
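In code, this construction might look as follows (a sketch under the term encoding introduced above; `consistent` stands for the hypothetical algorithm A, and the substitution helper `fix`, defined first, is reused in the later sketches).

```python
def fix(terms, var, value):
    """Substitute var := value in the DNF equation OR(terms) == 0.
    A literal that takes value 0 makes its whole term identically 0,
    so the term disappears; a literal that takes value 1 is simply
    dropped from its term."""
    out = set()
    for t in terms:
        if (var, not value) in t:        # this literal evaluates to 0
            continue
        out.add(t - {(var, value)})      # this literal evaluates to 1
    return out

def find_solution(consistent, terms, variables):
    """Use a consistency-testing oracle `consistent` (the algorithm A of
    the text) to build a solution of the equation, one variable at a time."""
    if not consistent(terms):
        return None                      # the equation has no solution at all
    solution = {}
    for var in variables:
        if consistent(fix(terms, var, False)):
            value = False                # var = 0 can be extended to a solution
        else:
            value = True                 # var must be 1 in all remaining solutions
        solution[var] = int(value)
        terms = fix(terms, var, value)
    return solution
```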
We may also want to generate all solutions of the equation. This is, of course, a formidable requirement, since a Boolean equation may well have an exponential number of solutions. We discuss various ways of handling this problem, either explicitly, in Section 2., or in parametric form, in Section 2. We may also be interested in counting the number of solutions of the equation.
We have already briefly mentioned this question in Sections 1. We return to it in Section 2. We may want to compute the optimal solution of the given equation according to a variety of numerical criteria.
Such formulations bring us into the realm of integer programming. They have already been evoked in Section 1. Finally, even when the equation is inconsistent, we may want to compute a point that comes as close as possible to satisfying the equation.
For instance, the famous maximum satisfiability problem, or Max Sat problem, is of this nature. This problem and some of its variants have been thoroughly investigated in the computational complexity literature. We discuss them in Section 2. For now, let us turn to the fundamental task of testing the consistency of a Boolean equation. There is a huge field to cover, and we shall primarily concentrate on exact Boolean approaches, as opposed, for instance, to heuristics and to numerical methods.
For additional information, we refer to the books [, ], to the collections of papers [, , ], to the surveys [, , ], and so on. We first consider branching procedures. In spite of, or perhaps because of, their simplicity, they have established themselves as very efficient, reliable, and versatile methods. Therefore, they deserve special attention in this chapter.
They also provide a general framework in which many useful algorithmic ideas can easily be explained and implemented. The starting point of most branching procedures is the following obvious observation: the equation φ = 0 is consistent if and only if at least one of the two equations obtained by fixing an arbitrarily chosen variable to 0 or to 1 is consistent. We are now going to describe the basic scheme of such a procedure, first informally, and then more rigorously. We restrict ourselves to a depth-first search version of the procedure, partly for the sake of simplicity, and also because many efficient implementations fall under this category (the reader will easily figure out what a more general branching scheme may look like).
We simply assume that this subroutine always returns one of three possible outputs: a proof that the equation is consistent, a proof that it is inconsistent, or an equivalent (possibly simplified) equation whose consistency status remains undetermined. The procedure Branch is displayed in Figure 2. The correctness of the procedure directly follows from Theorem 2. Of course, Branch cannot really be called an algorithm until we specify the rules applied to selecting the branching variable, as well as the specific features of the subroutine Preprocess. In practice, as demonstrated, for instance, in [, , ], the efficiency of Branch hinges critically on these factors, as well as on the strategy used to explore the search tree; see Section 2.
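To fix ideas, here is a bare-bones sketch of such a depth-first procedure (our own illustration, reusing the term encoding and the `fix` helper from the previous sketches; the deliberately naive branching rule and the absence of any Preprocess subroutine are simplifications, not features of the procedure described in the text).

```python
from typing import Dict, Optional

def branch(terms) -> Optional[Dict[str, int]]:
    """Return a solution of the DNF equation OR(terms) == 0, or None if
    the equation is inconsistent."""
    if frozenset() in terms:                 # an empty term is the constant 1,
        return None                          # so the equation is inconsistent
    if not terms:                            # an empty DNF is the constant 0,
        return {}                            # so every point is a solution
    var = next(iter(next(iter(terms))))[0]   # naive rule: branch on any variable
    for value in (False, True):
        solution = branch(fix(terms, var, value))
        if solution is not None:
            solution[var] = int(value)
            return solution
    return None
```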
We now proceed with a discussion of these topics. In particular, branching rules and preprocessing operations have been mostly investigated for DNF equations, and from now on, we restrict our attention to such equations. When branching is necessary, the branching variable can be chosen according to various strategies; some typical suggestions are listed below. Jeroslow and Wang [] and Harche, Hooker, and Thompson [] obtained good computational results with this branching rule, but see also Dubois et al. [].
The performance of this branching rule has been investigated in depth by Hooker and Vinay [], who also challenge its rationale and propose more efficient alternatives. Intuitively, this type of rule not only favors variables that appear frequently in short terms, but also tends to pick variables for which the subtrees created after branching are roughly balanced (this provides the motivation for the last term in 2.). Other practical branching rules are discussed in [58, , , , , , , ], and so on.
Dubois et al. [] use one such strategy; this type of strategy also appears, for instance, in publications by Brayton et al. []. Closely related concepts have recently been reexamined by Williams, Gomes, and Selman [] under the name backdoor sets; see also [, , , ] and the discussion of relaxation schemes in Section 2.
The branching rules described earlier may lead to ties, which can be broken either deterministically or randomly. Implementations of sophisticated randomized branching rules are found, for instance, in Bayardo and Schrag [58] and Crawford and Auton []. Interestingly, Gomes et al. [] have shown that such randomization can have a dramatic impact on running times. Departing from the basic algorithm Branch, some authors have suggested branching on terms of the current DNF rather than on its variables.
In their computational experiments, Gallo and Urbani [] and Bruni and Sassano [] found this rule to perform well. We now describe the main types of operations performed by the subroutine Preprocess: we successively handle rewriting rules, the Davis-Putnam rules, general heuristics, and relaxation schemes. The consensus procedure (see Section 2.) provides a typical example of a rewriting rule. The Davis-Putnam rules identify various special circumstances under which a variable xi can be fixed to a specific value without affecting the consistency of the equation.
The rules fall into two categories: unit literal rules (sometimes called unit clause rules, unit deduction rules, forward chaining rules, etc.) and monotone literal rules (sometimes called pure literal rules).
The unit literal rules are obviously valid; that is, the equation obtained after applying the rules is consistent if and only if the original equation is consistent. Within branching algorithms, they are usually applied in an iterative fashion until their premises are no longer satisfied. In the artificial intelligence literature, this procedure sometimes goes by the name of unit resolution, clausal chaining, or Boolean constraint propagation (BCP); see, e.g., [].
The unit literal rules can be implemented to run in linear time and are computationally efficient. It is worth noting that they are somewhat redundant with most of the branching rules described in the previous subsection, in the sense that these branching rules tend to select a variable appearing in a term of degree 1 when such a term exists (since the branching rules often give priority to variables appearing in short terms). Thus, many branching rules can be seen as automatically enforcing the unit literal rules when they are applicable, and as generalizing these rules to terms of higher degree otherwise.
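As an illustration, iterated application of the unit literal rules can be sketched as follows (our own sketch, reusing the earlier encoding and the `fix` helper; this plain version is quadratic, not the linear-time implementation mentioned above).

```python
def unit_literal_rules(terms):
    """Apply the unit literal rules until no term of degree 1 remains:
    a unit term consisting of the single literal x forces x := 0 in every
    solution of OR(terms) == 0 (dually, the unit term x-bar forces x := 1)."""
    while True:
        unit = next((t for t in terms if len(t) == 1), None)
        if unit is None:
            return terms
        (var, sign), = unit
        terms = fix(terms, var, not sign)   # assign the value that falsifies the unit term
```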
Separately handling the unit literal rules, however, usually allows for more efficient implementations. Let us now turn to the monotone literal rules. From a practical viewpoint, they can also be implemented to run in linear time, but they seem to have only a marginal effect on the performance of branching procedures. Generalizations of these rules have been investigated in []. Heuristics Any heuristic approach to consistency testing can be used within the branching framework: the heuristic either finds a solution of the equation, or it fails; in the latter case, Preprocess simply returns the original equation.
Jaumard, Stan, and Desrosiers [] similarly rely on a tabu search heuristic at every node of the branching tree. Relaxation schemes An interesting approach to preprocessing has been initiated by Gallo and Urbani [] (who also credit Minoux [unpublished] with a similar idea) and exploited by several other researchers in various frameworks.
This approach makes use of a basic ingredient of enumerative algorithms: the notion of relaxation of a problem. This idea is related to the notion of control set introduced in Section 2. Crama, Ekin, and Hammer [] have investigated the computational complexity of several versions of this problem. A natural choice of relaxation relies on Horn equations: Horn equations are precisely those DNF equations in which each term contains at most one complemented variable (recall Definition 1.). As we will see in Chapter 6, Horn equations can be solved in linear time (essentially, by repeated application of the unit literal rules).
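As a small illustration of this relaxation scheme (a sketch under the same term encoding as before; the function name is ours), the Horn relaxation of a DNF equation is obtained by simply discarding the non-Horn terms. Every solution of the original equation solves the relaxation, so an inconsistent relaxation proves the original equation inconsistent, while a solution of the relaxation is only a candidate that must still be checked against the discarded terms.

```python
def horn_relaxation(terms):
    """Keep only the Horn terms, i.e., those containing at most one
    complemented variable.  OR(horn_relaxation(terms)) == 0 is a
    relaxation of OR(terms) == 0: its solution set contains that of
    the original equation."""
    return {t for t in terms
            if sum(1 for (_, sign) in t if not sign) <= 1}
```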
More elaborate schemes are discussed in Gallo and Urbani [] or Pretolani []. Other authors have similarly proposed to relax the given DNF equation to a quadratic equation (quadratic equations, like Horn equations, are easily solved in linear time; see Chapter 5). As Larrabee [] observed, one may expect these approaches to perform particularly well when the equation contains a relatively high number of quadratic terms, as is the case with the equations arising from stuck-at fault detection in combinational circuits (see Application 2.).
Finally, we note that the decomposition techniques described by Truemper [] share some similarities with relaxation schemes. We now turn to the variable elimination method, which relies on the following result. It should be noted, however, that, contrary to Theorem 2., this result replaces the original equation by a single equation involving fewer variables, rather than by two alternative equations.
By successive elimination of a subset of variables, a necessary and sufficient condition for the consistency of equation 2. can be derived. More specifically, successive elimination of all variables of equation 2. yields a condition involving no variables at all, which can be checked directly. Before we make this more precise, however, we would like to address the following question: Suppose that equation 2. has been found to be consistent; how can we compute one of its solutions? The next result provides a constructive answer to this question.
The validity of this statement can be verified by direct substitution, but the following proof provides more insight into the nature of the elimination technique. Let us now illustrate the use of Theorems 2. and 2. on an example. By Theorem 2., the original equation is consistent if and only if the equation obtained after eliminating one variable is consistent; applying once again Theorem 2. eliminates a second variable, and so on. Using Theorem 2. iteratively in this way, we obtain the procedure Eliminate. The correctness of the procedure is an immediate consequence of Theorems 2. and 2. It should be noted, however, that Eliminate can be implemented in a variety of ways.
More precisely, the meaning of the assignment performed at each step of Eliminate needs to be specified. Also, there is no reason to stick to the original ordering x1, . . . , xn; rather, we may want to decide at each step, in a dynamic fashion, what variable to eliminate next. The answer to these questions may determine the efficiency of Eliminate to a large extent, and we now proceed to discuss them briefly. A naive implementation expands every expression over all assignments of the remaining variables; successive elimination then amounts to the complete enumeration of all points of Bn in order to check the necessary and sufficient condition for consistency.
By contrast, transforming the expression 2. into a DNF at each elimination step keeps the computation symbolic: the expression 2. is then represented by its set of terms, and this DNF can be further simplified by deleting any term that is identically 0 or is absorbed by another term.
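One elimination step, with the DNF kept in simplified form, can be sketched as follows (same encoding as in the earlier sketches; the decomposition φ = xA ∨ x̄B ∨ C used in the comments is our notation). Consistency is then decided by applying the step until no variable remains: the equation is consistent if and only if the final DNF has no terms.

```python
def eliminate(terms, var):
    """One step of the Eliminate procedure: writing the DNF as
    phi = var*A  OR  var-bar*B  OR  C  (with A, B, C free of var),
    the equation phi == 0 is consistent iff (A AND B) OR C == 0 is,
    so we return the DNF of (A AND B) OR C, simplified on the fly."""
    pos  = [t - {(var, True)}  for t in terms if (var, True)  in t]   # terms of A
    neg  = [t - {(var, False)} for t in terms if (var, False) in t]   # terms of B
    rest = [t for t in terms if (var, True) not in t and (var, False) not in t]
    out = set(rest)
    for a in pos:
        for b in neg:
            c = a | b
            # a product containing a complementary pair is identically 0: drop it
            if not any((v, not s) in c for (v, s) in c):
                out.add(c)
    # absorption: a term that contains another term is redundant
    return {t for t in out if not any(u < t for u in out)}
```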
Since a DNF is identically zero if and only if it has no terms, this approach sometimes allows us to detect consistency early in the elimination procedure, thus reducing the number of iterations required by Eliminate and speeding up termination. Consider the equation. We now turn to a brief discussion of the elimination ordering.
As noted earlier, there is no compelling reason to eliminate the variables in the order xn, . . . , x1. We may even want to determine dynamically (that is, on the run) which variable xi to eliminate next. In some situations, an obvious choice can be made for this next variable. It is easy to recognize in this description an alternative statement of the Davis-Putnam rules (see Section 2.).
Variable x5 only appears in uncomplemented form in this DNF. By the monotone literal rule, we can set x5 to 0, thus reducing the original problem to the equation solved in Example 2. The additional rules proposed by these authors consist in maintaining the DNF format throughout the procedure and in computing dynamically an effective variable elimination ordering.
Even with these refinements, however, the main computational hurdle of the elimination method remains: namely, the number of terms in the equation tends to explode in the initial phases of the procedure, before eventually decreasing along with the number of variables. As a result, computer implementations of elimination procedures rapidly face memory space problems, similar in nature to those encountered by other dynamic programming algorithms.
In effect, these problems are often serious enough to prohibit the solution of equations involving many variables. This difficulty was first noticed by Davis, Logemann, and Loveland [] and led them to replace the original form of the Davis-Putnam algorithm by a branching procedure of the type discussed in Section 2.
We will see in Sections 2. We now turn to the consensus procedure. It was originally designed as a method for generating (that is, listing) all prime implicants of a Boolean function given in DNF and was repeatedly discovered in this form by several independent researchers; see Blake [99], Samson and Mills [], Quine [], as well as Chapter 3.
Brown [] gives an interesting historical account of this line of research. As a solution method for CNF equations, the consensus method mostly owes its fame to Robinson []. In his seminal paper, Robinson introduced an inference principle, which he called the resolution principle, for first-order logic. The resolution principle subsequently became the cornerstone of many algorithmic techniques used by automated reasoning systems (see, e.g., []).
When specialized to propositional equations and translated from the CNF format favored by Robinson into the equivalent DNF framework adopted here, the resolution method becomes essentially identical to the consensus method, and it immediately follows from earlier works that resolution provides a correct solution procedure for Boolean equations (although Robinson [] used a direct, ad hoc argument to establish this important result).
In this section, we explain how a consensus-based procedure can be used to solve DNF Boolean equations. A general version of the consensus method, allowing the enumeration of all prime implicants of DNF expressions, is discussed more extensively in Chapter 3.
The essence of consensus procedures lies in the following observation: if a point satisfies both equations xC = 0 and x̄D = 0, then it also satisfies CD = 0; hence, adding the term CD to the DNF does not modify the solution set of the equation. Note the similarity of this statement with the statement of Theorem 2.
If xC and x̄D are two elementary conjunctions such that CD is not identically 0, then we say that CD is the consensus of these two conjunctions, and we say that CD is derived from xC and x̄D by consensus on x. One interpretation of Theorem 2. is that adding the consensus of two terms of the DNF does not modify the solution set of the equation. This view of consensus derivation, as an operation producing new elementary conjunctions from existing ones, leads to a natural extension of the previous concepts: the elementary conjunction C can be derived by consensus from a set S of elementary conjunctions if there exists a finite sequence C1, C2, . . . , Cm such that Cm = C and each Ci either belongs to S or is the consensus of two conjunctions appearing earlier in the sequence.
We are now ready to state the fundamental result that motivates the consideration of consensus derivation. As mentioned earlier, this theorem can be viewed as an immediate corollary of the results in Chapter 3 see Theorem 3. For the sake of completeness, we prove it here from first principles. A procedure for testing the consistency of DNF equations can now be stated as in Figure 2. The correctness of the procedure is an immediate corollary of Theorem 2.
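In the same spirit as the previous sketches (and with the same caveats: illustrative names, no attention paid to efficiency), the procedure can be rendered as follows.

```python
def consensus_procedure(terms):
    """Return True iff the DNF equation OR(terms) == 0 is consistent:
    close the set of terms under consensus; the equation is inconsistent
    exactly when the empty conjunction (the constant 1) is derived."""
    S = set(terms)
    while True:
        derived = set()
        for t1 in S:
            for t2 in S:
                for (var, sign) in t1:
                    if (var, not sign) in t2:
                        c = (t1 - {(var, sign)}) | (t2 - {(var, not sign)})
                        # keep the consensus only if it is not identically 0
                        if not any((v, not s) in c for (v, s) in c):
                            derived.add(c)
        derived -= S
        if frozenset() in derived or frozenset() in S:
            return False                   # the constant 1 was derived
        if not derived:
            return True                    # closure reached without deriving 1
        S |= derived
```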
From the terms x̄1 x2 x̄3 and x̄1 x̄2, we can derive the consensus x̄1 x̄3. This new term, together with x̄1 x3, yields the consensus x̄1. Two features of the consensus procedure deserve further attention. First, Consensus does not produce a solution of the DNF equation when there is one. Second, Consensus is not completely defined, since we did not specify how the terms xiC and x̄iD are to be chosen in the while-loop. We now successively tackle these two points. Consider first the fact that Consensus only delivers a consistency verdict for DNF equations, but no solution.
This is, from a theoretical viewpoint, no serious problem. Indeed, as explained in Section 2., any consistency-testing algorithm can be used to compute a solution. But the situation is actually even better here. Let us also notice, as a final remark on this topic, that the consensus procedure and its various extensions have mostly been used as equation-solving techniques within the field of automated theorem proving, where computing a solution is usually not required. On the other hand, what is valuable in this context is an explicit argument showing why a theorem is true (i.e., why the equation is inconsistent). A consensus derivation of inconsistency provides such an argument (although a sometimes insufficiently clear one; see [, ] for a more detailed discussion).
We now take up the second issue mentioned above: How are the terms xiC and x̄iD to be selected in the while-loop of the consensus procedure? A natural strategy is to give priority to unit consensus steps, that is, to consensus steps involving at least one term of degree 1. Note, for instance, that the consensus of xi and x̄iD is simply D, which absorbs x̄iD. Thus, unit consensus steps can be implemented without increasing the cardinality of the set S. If we restrict the procedure Consensus to the use of unit consensus steps, then the procedure becomes extremely fast. But, unfortunately, it can then fail to detect inconsistent equations.
Nevertheless, equation-solving heuristics based on this approach are widely used in automated reasoning procedures. Other specialized forms of consensus used in automated reasoning are the so-called set of support strategy, linear consensus, input refutation, and so on. Some of these variants will be introduced in subsequent chapters; we also refer to [, , ] and to the exercises at the end of this chapter for more information. We now turn to mathematical programming approaches. This is in sharp contrast with the methods discussed in previous sections, which rely on a purely symbolic treatment of the variables.
The idea of identifying the Boolean symbols 0 and 1 with numbers and reducing problems of logic to optimization problems goes back a long time (see, among others, Fortet [, ] and Hammer and Rudeanu []); the interest in such approaches has been revived in recent years. The first claim is just a restatement of Theorem 1.
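Concretely, the translation sends each term of the DNF, with uncomplemented variables indexed by A and complemented variables indexed by B, to the linear inequality stating that the term must be falsified: the sum of (1 - xi) over A plus the sum of xj over B must be at least 1. A sketch follows (output as raw coefficient data, so as not to tie the example to any particular solver; the function name is ours).

```python
def to_integer_program(terms):
    """Translate the DNF equation OR(terms) == 0 into a system of linear
    inequalities over 0-1 variables.  Each term with positive literals A
    and negative literals B becomes
        sum_{i in A} (1 - x_i) + sum_{j in B} x_j  >=  1,
    returned in the normalized form (coeffs, rhs), meaning
        sum_v coeffs[v] * x_v  >=  rhs."""
    constraints = []
    for t in terms:
        coeffs, rhs = {}, 1
        for (var, sign) in t:
            if sign:                          # uncomplemented: contributes 1 - x_var
                coeffs[var] = coeffs.get(var, 0) - 1
                rhs -= 1                      # move the constant term to the right
            else:                             # complemented: contributes x_var
                coeffs[var] = coeffs.get(var, 0) + 1
        constraints.append((coeffs, rhs))
    return constraints
```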
Such approaches have been taken up and developed by several researchers, following, in particular, some early work by Jeroslow and his coworkers; see, for example, [98, ], Williams [, ] and Hooker [, , , ], and so on. The book by Chandru and Hooker [] covers these developments in great detail, so that we shall content ourselves with a brief survey of the basic ideas see also Hooker [] for a discussion of logic-based methods in optimization.
Let us see what this approach amounts to. How effective is this particular version of Branch? The next result is due to Blair, Jeroslow, and Lowe [98]. Statements (a) and (b) easily follow from these observations, by induction on the number of variables fixed by unit consensus. Statement (c) is a corollary of the previous ones. It follows from Theorem 2.
While this is true to some extent, computational experiments indicate that this approach is rather inefficient and that special-purpose heuristics tend to outperform this general-purpose LP-based approach (see [98, ] and Section 2.). The integer programming framework, however, also offers insights of a more theoretical nature into the solution of Boolean equations. Let us first recall some definitions from [, , ]. We denote by ⌈x⌉ the smallest integer not smaller than x. We are now ready to apply these concepts to the solution of Boolean equations.
Take the sum of the two inequalities and divide both sides of the resulting inequality by 2; rounding up the right-hand side yields the consensus cut 2. Therefore, 2. is a valid cutting plane. This observation raises hope that a cutting-plane approach to the solution of Boolean equations may be practically efficient (see also Section 2.).
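As a small reconstructed illustration (the displayed inequalities of the original are not preserved here, so this example is ours): consider the terms x1x2 and x̄1x2, whose consensus on x1 is x2. The associated inequalities are

$$(1-x_1)+(1-x_2)\;\ge\;1, \qquad x_1+(1-x_2)\;\ge\;1 .$$

Summing them gives $2(1-x_2)+1 \ge 2$, that is, $(1-x_2) \ge 1/2$; since the left-hand side is an integer at every 0-1 point, we may round up the right-hand side to obtain $(1-x_2) \ge 1$, which is precisely the inequality associated with the consensus term $x_2$.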
This inefficient approach, however, can be accelerated in various ways. We refer to Chandru and Hooker [] for details and for additional theoretical developments, and to Chai and Kuehlmann [] or Manquinho and Marques-Silva [] for recent computational work along similar lines. Hooker [] presents further results about the integer programming approach to logic.
Another line of attack exploits the following observation (Theorem 2.): consider the DNF φ = C1 ∨ C2 ∨ · · · ∨ Cm; the equation φ = 0 is consistent if and only if the minimum of the pseudo-Boolean function f(X) = C1(X) + C2(X) + · · · + Cm(X) over Bn is equal to 0. The equivalence of statements (a) and (b) is obvious. Any algorithm for nonlinear programming can be used to optimize f(X) over Bn (see Chapter 13 and the survey []). Hammer, Rosenberg, and Rudeanu [, ], for instance, have proposed a variable elimination algorithm inspired from Eliminate for minimizing functions of the form 2. A streamlined version and an efficient implementation of this algorithm are described by Crama, Hansen, and Jaumard [], who also observe that this algorithm is applicable to the solution of Boolean equations.
The algorithm described in [] relies on numerical bounding procedures to control, to a certain extent, the combinatorial explosion inherent to elimination procedures (see Section 2.). The coefficients ck are arbitrary in 2. Wah and Shang [] propose a discrete Lagrangian algorithm for minimizing 2.
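Under the same term encoding as in the earlier sketches, the objective function of this formulation can be evaluated as follows (our own illustration; the brute-force minimizer is for tiny instances only, since the point of the formulation is precisely to apply smarter nonlinear programming methods).

```python
from itertools import product as cartesian_product

def f_value(terms, point):
    """f(X) = number of terms of the DNF taking value 1 at the 0-1 point X
    (a dict var -> 0/1); the equation OR(terms) == 0 is consistent iff
    the minimum of f over Bn equals 0."""
    return sum(1 for t in terms
               if all(point[var] == int(sign) for (var, sign) in t))

def brute_force_min(terms, variables):
    """Exhaustive minimum of f over Bn -- for illustration only."""
    return min(f_value(terms, dict(zip(variables, bits)))
               for bits in cartesian_product((0, 1), repeat=len(variables)))
```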
Recently, several authors have experimented with semidefinite programming reformulations of Boolean equations based on extensions of Theorem 2. Gu [] combines various continuous global optimization algorithms with backtracking techniques to compute the minimum of 2. We now turn to heuristic methods. As a matter of fact, heuristic methods for equation solving have been used for a long time in the artificial intelligence literature (see [, , ]).
We have already mentioned, for instance, that unit consensus is sometimes viewed as providing such an incomplete method. Linear consensus (see Exercise 9 in Section 2.) is another example. Both unit consensus and linear consensus may prove either consistency or inconsistency but may sometimes terminate without any conclusion. By contrast, a more recent trend of research has turned to heuristics that are unable to prove inconsistency and simply concentrate on the quest for solutions of the equation.
These approaches are typically based on Theorem 2. Pioneering work along these lines includes the work of Gu [, ]; Selman, Levesque, and Mitchell []; and Selman, Kautz, and Cohen [, ] (a very similar scheme was implemented by Hansen and Jaumard [] for solving the maximum satisfiability problem). In the GSAT algorithm, for instance, one starts from a random point and repeatedly switches the value of a variable chosen so as to yield the largest decrease in the number of terms satisfied at the current point; the decrease may be negative or 0.
GSAT was found to perform surprisingly well on a variety of experimental benchmark problems. It can be improved even further, however, if the variable to be switched is picked more carefully. Algorithms in the WalkSAT family [, ], for instance, select a term randomly among all terms of 2. that take value 1 at the current point, and then switch the value of one of the variables of this term.
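A minimal GSAT-style sketch follows (reusing `f_value` from the previous sketch; the parameter names and the tie-breaking choice are ours, and real implementations add the refinements discussed above).

```python
import random

def gsat(terms, variables, max_tries=10, max_flips=1000):
    """Local search for the DNF equation OR(terms) == 0: repeatedly switch
    the variable whose flip yields the smallest resulting value of f (the
    decrease may be negative or 0), restarting from a fresh random point
    after max_flips flips.  Returns a solution, or None, which is
    inconclusive: the procedure cannot prove inconsistency."""
    for _ in range(max_tries):
        point = {v: random.randint(0, 1) for v in variables}
        for _ in range(max_flips):
            if f_value(terms, point) == 0:
                return point                 # no term is satisfied: solution found
            best_var, best_val = None, float('inf')
            for v in variables:
                point[v] ^= 1                # tentative flip
                val = f_value(terms, point)
                point[v] ^= 1                # undo the flip
                if val < best_val:
                    best_var, best_val = v, val
            point[best_var] ^= 1             # commit the best flip
    return None
```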
Variations on this theme have been explored by several researchers; see, for instance, Gu et al. []. Finally, we note that local search algorithms have also been proposed as exact solution methods for DNF equations, as in Dantsin et al. [].
In spite of its simplicity, the basic branching scheme, enhanced by some additional features such as a smart branching rule or a tight relaxation, has repeatedly proved to provide one of the most effective ways of solving DNF Boolean equations. It should be noted, however, that some of the most recent implementations depart in various ways from the basic scheme described in Section 2. For instance, in their Relsat algorithm, Bayardo and Schrag [58] have incorporated look-back strategies inspired from the constraint satisfaction literature: Relsat is no longer restricted to performing a depth-first traversal of the search tree, but is allowed to backtrack in more intelligent ways.
For this purpose, the authors rely, at every node of the search tree, on information derived from past branchings by implicitly applying consensus operations on carefully selected pairs of terms. Thus, their approach provides an interesting, and extremely effective, link between branching-based and consensus-based techniques. In another paper, Gomes et al. [] propose a randomized rapid restart (RRR) strategy. In RRR, if the branching procedure does not stop after a small number of backtracks, then the run is terminated and restarted from the root (since the branching rule is randomized, two successive runs of the procedure usually behave differently).
In particular, the authors show that RRR further improves the performance of efficient algorithms like Relsat [58] and Satz []. The developments concerning the fast solution of Boolean equations have certainly not come to a halt yet and, in years to come, we should still witness much progress in the solution of this venerable problem.
We refer the reader to the surveys [, , , , ] and to the book [] for additional details and references. To conclude this chapter, let us compare the relative strength of the main procedures discussed above. Rather than viewing these procedures as precisely defined algorithms, we look at them as broad algorithmic frameworks, or proof systems.
For instance, we do not want to specify how the next branching variable or how the next consensus pair is selected.
Moreover, we only consider the simplest versions of the procedures, without any fancy preprocessing or additional heuristics. Cutting-plane procedures are stronger than consensus procedures, which are stronger than both branching and variable elimination procedures.
The relative strength of consensus and cutting-plane procedures was examined in Section 2. We noted at the end of Section 2. Thus, it only remains to establish that the consensus procedure is stronger than branching.