
Question: Show how to implement the stingy algorithm for Horn formula satisfiability (Section 5.3) in time that is linear in the length of the formula (the number of occurrences of literals in it). (Hint: Use a directed graph, with one node per variable, to represent the implications.)

Short Answer


Answer: Horn-satisfiability is the problem of deciding whether a given Horn formula (a conjunction of Horn clauses, each containing at most one positive literal) is satisfiable. The stingy algorithm marks a variable true only when some clause forces it, and by representing the implications as a directed graph it can be made to run in time linear in the number of literal occurrences.

Step by step solution

01

Build a directed (bipartite) graph:

Build a directed bipartite graph that connects each Horn clause to its positive literal (if it has one), and connects each variable to all the clauses in which it appears negated. In addition, store for every clause the number of its negated literals that have not yet been marked true.

The entire process runs in linear time in the number of literal occurrences: the graph has one edge per occurrence, and each variable is marked true at most once.
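
To make the construction concrete, here is a minimal Python sketch (the function name build_graph and the clause representation are my own choices, not from the text): each clause is stored as its positive literal together with the list of variables appearing negated in it, and the map occurs_neg plays the role of the directed edges from variables to clauses.

from collections import defaultdict

def build_graph(clauses):
    # clauses: list of (pos, negs) pairs, where pos is the positive literal
    # of the clause (or None) and negs lists the variables appearing negated.
    neg_count = []                    # remaining unmarked negated literals per clause
    occurs_neg = defaultdict(list)    # variable -> clauses where it appears negated
    for i, (pos, negs) in enumerate(clauses):
        neg_count.append(len(negs))
        for x in negs:
            occurs_neg[x].append(i)   # the directed edge x -> clause i
    return neg_count, occurs_neg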

02

The propagation procedure:

For each clause c that contains no negated literals {

propagate(c)

}

Finally, the subroutine:

propagate(c) {

if c has no positive literal, STOP: the formula is unsatisfiable

otherwise, let p be the positive literal of c; if p has not been marked true yet {

mark p true

for every edge (p, c') in the graph {

remove p from c' (p occurs negated in c')

if c' contains no more negated literals, propagate(c')

}

}

}

If this terminates without declaring the formula unsatisfiable, it is satisfiable: set every marked variable to true and all remaining variables to false.
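
Putting the two steps together, here is a hedged end-to-end sketch in Python (function names are mine, and the recursive propagate is replaced by an explicit stack, which changes nothing essential). Each variable is marked true at most once and each edge (p, c') is scanned at most once, which is where the linear bound comes from.

def horn_sat(clauses):
    # clauses: list of (pos, negs) pairs as in build_graph above.
    # Returns the set of variables to set true, or None if unsatisfiable.
    neg_count, occurs_neg = build_graph(clauses)
    true_vars = set()
    stack = [i for i, n in enumerate(neg_count) if n == 0]   # no negated literals
    while stack:
        c = stack.pop()
        pos, _ = clauses[c]
        if pos is None:              # clause has no literal left that could hold
            return None              # the formula is unsatisfiable
        if pos in true_vars:
            continue
        true_vars.add(pos)           # mark p true (happens at most once per variable)
        for c2 in occurs_neg[pos]:   # follow the edges (p, c')
            neg_count[c2] -= 1       # "remove p from c'"
            if neg_count[c2] == 0:
                stack.append(c2)     # propagate(c')
    return true_vars

For example, on the formula (x) ∧ (¬x ∨ y) ∧ (¬y ∨ ¬z), the call horn_sat([("x", []), ("y", ["x"]), (None, ["y", "z"])]) returns {"x", "y"}, so setting x and y true and z false satisfies the formula.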



Most popular questions from this chapter

Show that for any integer n that is a power of 2, there is an instance of the set cover problem (Section 5.4) with the following properties:

  1. There are n elements in the base set.
  2. The optimal cover uses just two sets.
  3. The greedy algorithm picks at least log n sets.

Thus the approximation ratio we derived in the chapter is tight.

In this problem, we will develop a new algorithm for finding minimum spanning trees. It is based upon the following property:

Pick any cycle in the graph, and let e be the heaviest edge in that cycle. Then there is a minimum spanning tree that does not contain e.

(a) Prove this property carefully.

(b) Here is the new MST algorithm. The input is some undirected graph G = (V, E) (in adjacency list format) with edge weights {w_e}.

sort the edges according to their weights
for each edge e ∈ E, in decreasing order of w_e:
    if e is part of a cycle of G:
        G = G - e (that is, remove e from G)
return G

Prove that this algorithm is correct.

(c) On each iteration, the algorithm must check whether there is a cycle containing a specific edge e. Give a linear-time algorithm for this task, and justify its correctness.

(d) What is the overall time taken by this algorithm, in terms of |E|? Explain your answer.
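
For readers who want to experiment with the algorithm of part (b), here is a rough Python sketch (the name reverse_delete_mst is mine, and the cycle test below uses a simple breadth-first reachability check rather than the linear-time method that part (c) asks for):

from collections import defaultdict, deque

def reverse_delete_mst(edges):
    # edges: list of (u, v, w); the graph is assumed connected and simple.
    # Repeatedly discards the heaviest remaining edge that lies on a cycle.
    adj = defaultdict(set)
    for u, v, w in edges:
        adj[u].add(v)
        adj[v].add(u)

    def still_connected(u, v):
        # BFS: is v reachable from u after the edge u-v has been removed?
        seen, queue = {u}, deque([u])
        while queue:
            x = queue.popleft()
            if x == v:
                return True
            for y in adj[x]:
                if y not in seen:
                    seen.add(y)
                    queue.append(y)
        return False

    kept = []
    for u, v, w in sorted(edges, key=lambda e: -e[2]):   # decreasing weight
        adj[u].discard(v)
        adj[v].discard(u)
        if still_connected(u, v):   # u-v lay on a cycle: drop it for good
            continue
        adj[u].add(v)               # otherwise it is a bridge: put it back
        adj[v].add(u)
        kept.append((u, v, w))
    return kept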

Suppose you are given a weighted graph G=(V,E) with a distinguished vertex s and where all edge weights are positive and distinct. Is it possible for a tree of shortest paths from s and a minimum spanning tree in G to not share any edges? If so, give an example. If not, give a reason.

The basic intuition behind Huffman’s algorithm, that frequent blocks should have short encodings and infrequent blocks should have long encodings, is also at work in English, where typical words like I, you, is, and, to, from, and so on are short, and rarely used words like velociraptor are longer.

However, words like fire!, help!, and run! are short not because they are frequent, but perhaps because time is precious in situations where they are used.

To make things theoretical, suppose we have a file composed of m different words, with frequencies f_1, ..., f_m. Suppose also that for the ith word, the cost per bit of encoding is c_i. Thus, if we find a prefix-free code where the ith word has a codeword of length l_i, then the total cost of the encoding will be ∑_i f_i · c_i · l_i.

Show how to modify Huffman’s algorithm to find the prefix-free encoding of minimum total cost.
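
Since the total cost ∑_i f_i · c_i · l_i depends on word i only through the product f_i · c_i, one natural modification (sketched here as an assumption about the intended answer, with the function name huffman_with_costs my own) is to run ordinary Huffman using the combined weights w_i = f_i · c_i:

import heapq

def huffman_with_costs(freqs, costs):
    # Returns codeword lengths l_i for a prefix-free code minimizing
    # sum_i f_i * c_i * l_i, by running Huffman on the weights w_i = f_i * c_i.
    weights = [f * c for f, c in zip(freqs, costs)]
    if len(weights) == 1:
        return [1]
    heap = [(w, [i]) for i, w in enumerate(weights)]   # (subtree weight, its words)
    heapq.heapify(heap)
    lengths = [0] * len(weights)
    while len(heap) > 1:
        w1, words1 = heapq.heappop(heap)
        w2, words2 = heapq.heappop(heap)
        for i in words1 + words2:      # each merge adds one bit to these codewords
            lengths[i] += 1
        heapq.heappush(heap, (w1 + w2, words1 + words2))
    return lengths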

A server has n customers waiting to be served. The service time required by each customer is known in advance: it is t_i minutes for customer i. So if, for example, the customers are served in order of increasing i, then the ith customer has to wait ∑_{j=1}^{i} t_j minutes. We wish to minimize the total waiting time

T = ∑_{i=1}^{n} (time spent waiting by customer i).

Give an efficient algorithm for computing the optimal order in which to process the customers.
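
If the intended answer is the usual shortest-service-time-first rule (an assumption on my part; optimal_order below is a hypothetical name), a short sketch looks like this:

def optimal_order(t):
    # t[i] is the service time of customer i. Serving shorter jobs first
    # minimizes the total waiting time T = sum of the prefix sums of t.
    order = sorted(range(len(t)), key=lambda i: t[i])
    total, elapsed = 0, 0
    for i in order:
        elapsed += t[i]     # customer i is done after `elapsed` minutes
        total += elapsed    # and has therefore waited `elapsed` minutes
    return order, total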
