Approximation algorithms for precedence-constrained scheduling problems on parallel machines that run at different speeds

Fabian A. Chudak*        David B. Shmoys†

* chudak@cs.cornell.edu. School of Operations Research & Industrial Engineering, Cornell University, Ithaca, NY. Research partially supported by NSF grants.
† shmoys@cs.cornell.edu. School of Operations Research & Industrial Engineering and Department of Computer Science, Cornell University, Ithaca, NY. Research partially supported by NSF grants and by an ONR grant.

Abstract

We present new approximation algorithms for the problem of scheduling precedence-constrained jobs on parallel machines that are uniformly related. That is, there are n jobs and m machines; each job j requires p_j units of processing and is to be processed on one machine without interruption; if it is assigned to machine i, which runs at a given speed s_i, then it takes p_j/s_i time units. There is also a partial order ≺ on the jobs, where j ≺ k implies that job k may not start processing until job j has been completed. We shall consider two objective functions: C_max = max_j C_j, where C_j denotes the completion time of job j, and Σ_j w_j C_j, where w_j is a weight that is given for each job j. For the first objective, the best previously known result is an O(√m)-approximation algorithm, which was shown by Jaffe more than 15 years ago. We give an O(log m)-approximation algorithm. We also show how to extend this result to obtain an O(log m)-approximation algorithm for the second objective, albeit with a somewhat larger constant. These results also extend to settings in which each job j has a release date r_j before which the job may not begin processing. In addition, we obtain stronger performance guarantees if there is a limited number of distinct speeds. Our results are based on a new linear-programming-based technique for estimating the speed at which each job should be run, and on a variant of the list scheduling algorithm of Graham that can exploit this additional information.

1 Introduction

The study of performance guarantees for approximation algorithms can be traced back to the work of Graham, who studied the following scheduling problem. There are n jobs to be scheduled on m identical parallel machines, where each job j, j = 1, ..., n, requires p_j uninterrupted units of processing on one of the machines, and each machine can process at most one job at a time; furthermore, there is a partial order ≺ on the jobs that specifies precedence constraints, where j ≺ k means that the processing of job j must be completed by the time job k is started. The objective is to minimize the length of the schedule, that is, the time by which all jobs have been completed. Graham showed that a simple list scheduling rule finds a schedule of length within a factor of 2 of optimum, and no better constant guarantee is known today. We shall say that an algorithm is a ρ-approximation algorithm if it is guaranteed to deliver, in polynomial time, a solution of objective function value within a factor of ρ of optimal.

Our work concerns a natural generalization of this problem in which each machine i runs at a specified speed s_i, i = 1, ..., m, so that if job j is processed by machine i, it takes p_j/s_i time units to be completed, i = 1, ..., m, j = 1, ..., n; such parallel machines are called uniformly related. Liu & Liu showed that one could still analyze the list scheduling approach in this setting, but they proved a performance guarantee that depends on the particular speeds of the machines; even for a fixed number of machines, this guarantee can be arbitrarily large.
Jaffe refined this analysis and showed that if one restricts attention to those machines whose speeds are within a factor of √m of the fastest machine, and applies list scheduling using only those machines, then the resulting algorithm is an O(√m)-approximation algorithm. Prior to our work, this was the best known performance guarantee for the problem of scheduling on uniformly related parallel machines subject to precedence constraints. We shall provide an O(log m)-approximation algorithm for this problem. Furthermore, we shall provide a much stronger performance guarantee for instances in which there is a limited number of distinct machine speeds.

If all of the jobs are independent, or in other words, there are no precedence constraints, then stronger performance guarantees are known: Gonzalez, Ibarra & Sahni gave a simple 2-approximation algorithm, and Hochbaum & Shmoys gave a polynomial approximation scheme, or in other words, provided a (1+ε)-approximation algorithm for each ε > 0. A result of Lenstra & Rinnooy Kan implies that if there are precedence constraints, then, for each ε > 0, no (4/3 − ε)-approximation algorithm exists unless P = NP.

The performance guarantees that have been obtained for machines that run at different speeds have typically matched those obtained for the special case in which the machines are identical. This analogy would be complete if one could show that there exists a constant-factor approximation algorithm, or perhaps even a 2-approximation algorithm, for the precedence-constrained problem on uniformly related parallel machines. By improving the best known performance guarantee for this case from O(√m) to O(log m), we are taking a significant step towards this goal.

Throughout this paper, we shall use the following notation, which follows the survey article of Graham, Lawler, Lenstra & Rinnooy Kan. For a given schedule, C_j denotes the completion time of job j, j = 1, ..., n. Thus the length of the schedule is max_{j=1,...,n} C_j, and we shall denote this by C_max. We shall also be interested in minimizing the weighted sum of job completion times; that is, each job j has a given weight w_j, j = 1, ..., n, and the objective is to minimize Σ_{j=1}^n w_j C_j. We shall also give an O(log m)-approximation algorithm for this objective, which improves upon an O(√m) performance guarantee proved by Schulz. We shall also adopt the α|β|γ notation of Graham, Lawler, Lenstra & Rinnooy Kan to specify scheduling problems. We shall only be interested in the cases in which α is either P or Q, denoting identical parallel machines or uniformly related parallel machines, respectively; β is a (possibly empty) subset of {r_j, prec, pmtn}, where r_j indicates that each job j has a specified release date before which it may not start processing, prec indicates that there are precedence constraints, and pmtn indicates that preemption is allowed, that is, the processing of a job may be interrupted and continued later, possibly on a different machine; and γ indicates the objective function, which will be either C_max or Σ w_j C_j. For example, the main results of this paper are O(log m)-approximation algorithms for the problems Q|r_j, prec|C_max and Q|r_j, prec|Σ w_j C_j.

The list scheduling algorithm of Graham can be described as follows. The jobs are listed in some order, for example, 1, ..., n. The schedule is constructed in time: whenever a job completes, each idle machine selects the first as yet unscheduled job on the list for which all of the predecessors have been completed; if no such job exists, then the machine remains idle until the next job has been completed.
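For concreteness, the following short Python sketch implements Graham's list scheduling rule for identical machines with precedence constraints. It is only an illustration of the rule described above; the function name and data layout (processing times p[j] and predecessor lists preds[j], with jobs numbered from 0) are ours, not the paper's.

    import heapq

    def graham_list_schedule(p, preds, m, order=None):
        """List scheduling on m identical machines with precedence constraints.

        p[j]     -- processing time of job j (jobs are 0..n-1)
        preds[j] -- jobs that must finish before job j may start
        order    -- the priority list (defaults to 0, 1, ..., n-1)
        Returns a list of completion times C[j].
        """
        n = len(p)
        order = list(range(n)) if order is None else list(order)
        C = [None] * n               # completion times, filled in as jobs finish
        remaining = set(order)       # jobs not yet started
        free = m                     # number of idle machines
        events = []                  # heap of (finish_time, job)
        t = 0.0
        while remaining or events:
            # start jobs whose predecessors are all done, as long as a machine is idle
            started = True
            while started and free > 0:
                started = False
                for j in order:
                    if j in remaining and all(C[i] is not None for i in preds[j]):
                        remaining.discard(j)
                        free -= 1
                        heapq.heappush(events, (t + p[j], j))
                        started = True
                        break
            if not events:
                break                # only happens if the precedence graph has a cycle
            t, j = heapq.heappop(events)   # advance time to the next completion
            C[j] = t
            free += 1
        return C

The same event-driven structure is reused below for the speed-based variant of the rule.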
Our algorithm is based on the following simple ideas. We consider only the machines whose speeds are within a factor of m of the fastest machine, and partition these machines into O(log m) groups, so that within a group all machines have roughly the same speed, for example, within a factor of 2 of each other. We then assign each job to one of these groups; each job will be processed by one of the machines in its assigned group. The schedule is then constructed by the following slight variant of the list scheduling algorithm: each machine may only select a job for processing if this job has been assigned to its group. Of course, the performance of the algorithm depends critically on the way in which the job assignments are made. Our assignment algorithm is based on solving a linear programming relaxation of the problem in order to determine an estimate of the speed at which each job should be run; the final assignment then ensures that the job is processed at least at that speed, while also assuring that the loads on the various groups are sufficiently well balanced.

In several respects, this algorithm is similar to an on-line O(log m)-approximation algorithm of Shmoys, Wein & Williamson for Q||C_max. For their algorithm, each job assignment was based simply on whether the job could finish on a machine in that group by a currently estimated deadline. Our algorithm was also motivated by recent results of Hall, Schulz, Shmoys & Wein and of Chakrabarti, Phillips, Schulz, Shmoys, Stein & Wein that employ linear programming in the design of list scheduling algorithms for minimizing Σ w_j C_j. In those algorithms, the optimal solution to a linear programming relaxation was used to order the jobs in the list. We will also take advantage of these ideas in our algorithm for the Σ w_j C_j objective.

Finally, we consider the generalizations of these problems in which each job has a specified release date r_j before which it is not allowed to begin processing. For the C_max objective, this is not a substantially more general problem: Shmoys, Wein & Williamson have given a broadly applicable technique that takes a ρ-approximation algorithm for a scheduling problem α|β|C_max without release dates and converts it into a 2ρ-approximation algorithm for the generalization with release dates; in fact, the new algorithm is on-line in the sense that, for each point in time t, the algorithm schedules up to time t without knowledge of those jobs released at time t or later. The technique is quite simple (a sketch follows below): the jobs are scheduled in batches, where the jobs in the next batch are simply those jobs that were released while the jobs in the previous batch were being processed. Hence, an immediate corollary of our work is an O(log m)-approximation algorithm for the generalization with release dates. For the Σ w_j C_j objective, we can incorporate the release date constraints into our linear programming formulation, and obtain a relatively direct extension in this way.
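The batching idea can be sketched as follows. This is only a schematic Python fragment under our own assumptions, not the algorithm of Shmoys, Wein & Williamson itself; in particular, schedule_without_releases is a placeholder for any black-box routine (such as a ρ-approximation algorithm) that schedules a given job set when release dates are ignored.

    def schedule_in_batches(jobs, release, schedule_without_releases):
        """Convert a no-release-date scheduler into one that respects release dates.

        jobs     -- iterable of job identifiers
        release  -- dict: job -> release date
        schedule_without_releases(batch) -> (start, length), where start[j] is the
            start time of job j relative to the beginning of the batch and length
            is the makespan of that batch.
        Assumes release[j'] <= release[j] whenever j' precedes j, so a predecessor
        is never placed in a later batch than its successor.
        Returns absolute start times for all jobs.
        """
        pending = set(jobs)
        t = 0.0
        abs_start = {}
        while pending:
            batch = {j for j in pending if release[j] <= t}
            if not batch:
                t = min(release[j] for j in pending)   # wait for the next release
                continue
            start, length = schedule_without_releases(batch)
            for j in batch:
                abs_start[j] = t + start[j]            # shift the batch to begin at time t
            t += length        # the next batch holds the jobs released in the meantime
            pending -= batch
        return abs_start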
2 A speed-based list scheduling algorithm

In this section we introduce the variant of the list scheduling algorithm that is the basis for our approximation algorithms for scheduling on uniformly related machines. The proof of the performance guarantee of the list scheduling algorithm for identical parallel machines is quite simple. Graham observed that any schedule produced by this algorithm has a chain of jobs j_1 ≺ j_2 ≺ ... ≺ j_r such that a machine is idle only when some job in the chain is being processed. The total processing requirement of any chain is a lower bound on C*_max, the length of the optimal schedule. Similarly, the total length of the time intervals in which all machines are busy is also a lower bound on C*_max. Hence the total length of the schedule is at most 2 C*_max.

If one considers the straightforward extension of this analysis to uniformly related parallel machines, then the main difficulty is that the first lower bound no longer holds. The time required to process a particular chain of jobs can only be lower bounded by the time required on the fastest machine, and the list scheduling algorithm might not have scheduled these jobs on the fastest machine. However, if we know in advance at which speed each job is to be processed, then the length of time spent processing a chain provides a lower bound once again. Unfortunately, if we are restricted to process jobs at particular speeds, then we can no longer ensure that all machines are busy. However, we can ensure something slightly weaker, namely that all machines of one particular speed are busy, and this is the central idea behind our algorithm.

Assume that there are m_k machines of speed s_k, k = 1, ..., K, where s_1 ≥ s_2 ≥ ... ≥ s_K; furthermore, for each job j, j = 1, ..., n, let k(j) index the speed at which job j is to be processed. For this assignment, we can compute the total load D_k assigned to the machines of speed s_k; that is,

    D_k = Σ_{j : k(j) = k} p_j / (m_k s_k),    k = 1, ..., K.

We can also bound the length of each chain of jobs: let C denote the maximum, over all chains 𝒞, of

    Σ_{j ∈ 𝒞} p_j / s_{k(j)}.

Consider the following speed-based list scheduling algorithm for scheduling the jobs, listed in the order 1, ..., n. The schedule is constructed in time: whenever a job completes, we attempt to start a job on each machine that is not currently processing a job, considering the idle machines in some given order. For a machine of speed s_k, we select the first job j on the list that has not already been scheduled, for which k(j) = k and all of its predecessors have been completed; if no such job exists, then this machine remains idle until the next job has been completed. The speed-based list scheduling algorithm clearly produces a feasible schedule. We will next show how to bound the length of this schedule.

[Figure 1: Scheduling by the speed-based algorithm.]
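A minimal Python sketch of the speed-based variant follows; it assumes that the machines have already been grouped by speed and that the assignment k(j) is given, and it reuses the event-driven structure of the earlier sketch. Again, the names and data layout are illustrative only.

    import heapq

    def speed_based_list_schedule(p, preds, groups, assign, order=None):
        """Speed-based list scheduling on uniformly related machines.

        p[j]      -- processing requirement of job j (jobs are 0..n-1)
        preds[j]  -- predecessors of job j
        groups    -- dict: k -> (m_k, s_k), number and speed of the group-k machines
        assign[j] -- the group k(j) on whose machines job j must run
        Returns completion times C[j].
        """
        n = len(p)
        order = list(range(n)) if order is None else list(order)
        C = [None] * n
        remaining = set(order)
        idle = {k: mk for k, (mk, sk) in groups.items()}   # idle machines per group
        events = []                                        # heap of (finish_time, job, group)
        t = 0.0
        while remaining or events:
            started = True
            while started:
                started = False
                for j in order:
                    k = assign[j]
                    if (j in remaining and idle[k] > 0
                            and all(C[i] is not None for i in preds[j])):
                        remaining.discard(j)
                        idle[k] -= 1
                        mk, sk = groups[k]
                        heapq.heappush(events, (t + p[j] / sk, j, k))
                        started = True
                        break
            if not events:
                break                                      # cyclic precedence graph
            t, j, k = heapq.heappop(events)
            C[j] = t
            idle[k] += 1
        return C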
Theorem 2.1. For any job assignment k(j), j = 1, ..., n, the speed-based list scheduling algorithm produces a schedule of length

    C_max ≤ C + Σ_{k=1}^K D_k.

Proof. The proof is a generalization of the analysis of Graham. Consider the schedule produced by our algorithm, as depicted in Figure 1. We will partition the time from 0 to C_max into K+1 parts, T, T_1, ..., T_K, where each part consists of a union of disjoint time intervals. Consider a job j_r that completes at time C_max. We construct a chain 𝒞 that ends with j_r as follows: for t = r, r-1, ..., iteratively define j_{t-1} to be a predecessor of j_t that completes last in the schedule; if j_t has no predecessors, then the chain 𝒞 = (j_1, ..., j_r) is complete. Let T denote the periods of time during which some job of 𝒞 is being processed. The complementary time periods consist of a collection of disjoint intervals, each of which ends at a time at which some job of 𝒞 begins processing; the interval that ends when job j ∈ 𝒞 begins processing is assigned to T_{k(j)}. We have now partitioned the time from 0 to C_max into the sets T, T_1, ..., T_K.

First of all, the total length of T is at most C, since the algorithm processes each job j on a machine of speed s_{k(j)}, j = 1, ..., n. Next, we argue that during each time interval included in T_k, no machine of speed s_k is idle, k = 1, ..., K. The reason is simple. Consider one such interval I, assigned to T_k, and the job j ∈ 𝒞 that starts at its right endpoint. Our construction ensures that k(j) = k, and that all predecessors of j have completed by the start of I, since the immediate predecessor j' of j in 𝒞 completes at the start of I, and j' was selected as a predecessor of j that is last to complete. Consequently, if a machine of speed s_k were idle at some point of I, then job j would have been scheduled earlier by the algorithm. Hence all machines of speed s_k are busy throughout each interval of T_k. If the total length of the intervals in T_k is t_k, then at least s_k m_k t_k units of processing are done on these machines during this time. However, only Σ_{j : k(j)=k} p_j units of processing are to be performed on machines of speed s_k throughout the entire schedule. Hence

    t_k ≤ Σ_{j : k(j)=k} p_j / (m_k s_k) = D_k,    k = 1, ..., K.

Therefore, the length of the entire schedule is at most C + Σ_{k=1}^K D_k.

3 An O(log m)-approximation algorithm for Q|r_j, prec|C_max

In this section we show how to assign jobs to machine speeds so as to yield a (K + 2√K + 1)-approximation algorithm for inputs in which there are K distinct machine speeds. We then show how this algorithm can be modified to obtain an O(log m)-approximation algorithm in general. Furthermore, by a general result of Shmoys, Wein & Williamson, this implies that there is also an O(log m)-approximation algorithm for the more general setting in which each job j has a specified release date r_j before which it may not begin processing.

As before, suppose that there are m_k machines of speed s_k, where s_1 > s_2 > ... > s_K. In any schedule of length C_max, the kth group of machines can perform at most m_k s_k C_max units of processing. If we were to schedule the jobs only on the group of machines for which this capacity m_k s_k is largest, then we would increase the total load on this group by at most a factor of K. Of course, if this group of machines is also the slowest, then we might greatly increase the length of time needed to process any chain. Hence, we will first compute a good estimate of the length of time that it is reasonable to spend processing each job, and then assign each job to the greatest-capacity group among those machines that run at least that fast. The estimates will be computed by solving a relaxation of Q|prec|C_max.

We would like to assign jobs to machine speeds in such a way that the upper bound C + Σ_{k=1}^K D_k of Theorem 2.1 is not too large. We therefore introduce a relaxation of the problem of computing an assignment that corresponds to a schedule of length D. We use variables x_kj to indicate whether or not job j is assigned to run at speed s_k. Since each job is to be assigned to some machine speed,

    (1)    Σ_{k=1}^K x_kj = 1,    j = 1, ..., n.

The total processing requirement assigned to the machines of speed s_k must not exceed the amount that these machines can process by time D:

    (2)    Σ_{j=1}^n p_j x_kj / (m_k s_k) ≤ D,    k = 1, ..., K.

To bound the length of each chain, we introduce variables C_j, j = 1, ..., n, which are not constrained to be integer and represent completion times. Clearly, a job cannot complete earlier than the time required to process it on its assigned machine:

    (3)    Σ_{k=1}^K (p_j / s_k) x_kj ≤ C_j,    j = 1, ..., n.

Furthermore, for each precedence relation j' ≺ j, the difference C_j − C_{j'} must be at least the time spent processing j:

    (4)    C_{j'} + Σ_{k=1}^K (p_j / s_k) x_kj ≤ C_j,    if j' ≺ j.

Of course, if the schedule is of length D, each job must complete by time D:

    (5)    C_j ≤ D,    j = 1, ..., n.

Finally, the assignment variables are constrained to be 0-1:

    (6)    x_kj ∈ {0, 1},    k = 1, ..., K,  j = 1, ..., n.

Hence, minimizing D subject to (1)-(6) is a mixed integer programming relaxation of Q|prec|C_max; that is, the optimal value of this mixed integer program is a lower bound on the length C*_max of an optimal schedule. For any assignment that corresponds to a feasible solution of this mixed integer program with objective function value D, the speed-based list scheduling algorithm finds a schedule of length at most (K+1)D: the constraints (3)-(5) imply that the length of any chain is at most D, and the constraints (2) imply that, for each k = 1, ..., K, the total load D_k assigned to the machines of speed s_k is at most D. In fact, Theorem 2.1 says something much stronger: in order to obtain this bound, we do not need an integer solution that satisfies (2), but merely one that satisfies the aggregated constraint

    (7)    Σ_{k=1}^K Σ_{j=1}^n p_j x_kj / (m_k s_k) ≤ KD.

Our algorithm will make use of the fact that this weaker inequality is sufficient.

Suppose that we replace the constraints (6) by

    (6')   x_kj ≥ 0,    k = 1, ..., K,  j = 1, ..., n.

The linear programming problem of minimizing D subject to (1)-(5) and (6') will be denoted (LP). This linear program can be solved in polynomial time, and its optimal solution x*_kj, k = 1, ..., K, j = 1, ..., n, and C*_j, j = 1, ..., n, has objective function value D* that is a lower bound on the optimal schedule length C*_max.
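Since (LP) is an ordinary linear program, it can be written down directly with any LP modelling tool. The sketch below uses the third-party PuLP package purely as an example of how the constraints (1)-(5) translate into code; the choice of modelling interface is our own assumption and is not part of the paper.

    from pulp import LpProblem, LpMinimize, LpVariable, lpSum

    def solve_lp_relaxation(p, speeds, counts, prec):
        """Build and solve (LP): minimize D subject to (1)-(5) with x >= 0.

        p       -- p[j], processing requirements, jobs 0..n-1
        speeds  -- s[k], the K distinct machine speeds
        counts  -- m[k], the number of machines of speed s[k]
        prec    -- list of pairs (a, b) meaning job a precedes job b
        Returns (D*, x*, C*) as plain Python numbers and dicts.
        """
        n, K = len(p), len(speeds)
        prob = LpProblem("speed_assignment_relaxation", LpMinimize)
        D = LpVariable("D", lowBound=0)
        x = {(k, j): LpVariable(f"x_{k}_{j}", lowBound=0)
             for k in range(K) for j in range(n)}
        C = {j: LpVariable(f"C_{j}", lowBound=0) for j in range(n)}
        prob += D                                                         # objective
        for j in range(n):
            prob += lpSum(x[k, j] for k in range(K)) == 1                              # (1)
            prob += lpSum((p[j] / speeds[k]) * x[k, j] for k in range(K)) <= C[j]      # (3)
            prob += C[j] <= D                                                          # (5)
        for k in range(K):
            prob += lpSum(p[j] * x[k, j] for j in range(n)) <= counts[k] * speeds[k] * D   # (2)
        for a, b in prec:
            prob += C[a] + lpSum((p[b] / speeds[k]) * x[k, b] for k in range(K)) <= C[b]   # (4)
        prob.solve()
        xval = {(k, j): x[k, j].value() for k in range(K) for j in range(n)}
        Cval = {j: C[j].value() for j in range(n)}
        return D.value(), xval, Cval

For an instance with n jobs and K distinct speeds the program has O(nK) variables and O(nK + |≺|) constraints, so it is of polynomial size.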
This optimal solution to (LP) can be used to assign the jobs to machine speeds in a good way. The assignment algorithm is quite simple. For each job j, j = 1, ..., n, we first compute the average length of time p̄_j spent processing it in the optimal solution to (LP):

    p̄_j = Σ_{k=1}^K (p_j / s_k) x*_kj,    j = 1, ..., n.

Furthermore, we let B_j index the set of bad speeds that are too slow for job j:

    B_j = {k : p_j / s_k > 2 p̄_j},    j = 1, ..., n.

For each job j, j = 1, ..., n, we let k(j) be the index k ∉ B_j for which p_j/(s_k m_k) is minimized, or equivalently, the one for which s_k m_k is largest. For this assignment k(j), j = 1, ..., n, we shall show that C + Σ_{k=1}^K D_k ≤ (2K + 2) D*. Since D* ≤ C*_max, applying the speed-based list scheduling algorithm to this assignment yields a (2K + 2)-approximation algorithm.

We first bound the length of any chain under the assignment k(j), j = 1, ..., n.

Lemma 3.1. The assignment algorithm computes values k(j), j = 1, ..., n, such that, for each chain of jobs 𝒞,

    Σ_{j ∈ 𝒞} p_j / s_{k(j)} ≤ 2 D*.

Proof. Consider a chain 𝒞 consisting of jobs j_1 ≺ j_2 ≺ ... ≺ j_r. To prove the lemma, we first show that

    Σ_{j ∈ 𝒞} p̄_j ≤ D*.

Then, since our algorithm ensures that each job j is processed at a speed s_{k(j)} at which it takes at most 2 p̄_j time units, we obtain the claimed bound. Since x* satisfies (3), p̄_{j_1} ≤ C*_{j_1}. Furthermore, x* satisfies (4), and so, since j_{i-1} ≺ j_i, i = 2, ..., r, we have that p̄_{j_i} ≤ C*_{j_i} − C*_{j_{i-1}}, i = 2, ..., r. Adding these constraints, we see that

    Σ_{j ∈ 𝒞} p̄_j ≤ C*_{j_r}.

Finally, by (5), C*_{j_r} ≤ D*, and hence the claim holds.

We next bound the total load implied by our assignment k(j), j = 1, ..., n.

Lemma 3.2. Let x̂_kj = 1 if k = k(j), j = 1, ..., n, and let x̂_kj = 0 otherwise. Then

    Σ_{k=1}^K Σ_{j=1}^n p_j x̂_kj / (m_k s_k) ≤ 2K D*.

Proof. Consider the problem of optimizing over a simplex: minimize Σ_{j=1}^N c_j y_j subject to Σ_{j=1}^N y_j = 1, y_j ≥ 0, j = 1, ..., N. It is easy to see that there is an optimal solution y that is integer: find j* such that c_{j*} = min_j c_j, set y_{j*} = 1, and set all other components of y to 0. Similarly, the solution x̂ is an optimal solution to the following linear program:

    minimize    Σ_{k=1}^K Σ_{j=1}^n p_j x_kj / (m_k s_k)
    subject to  Σ_{k=1}^K x_kj = 1,    j = 1, ..., n,
                x_kj = 0,    if k ∈ B_j,  j = 1, ..., n,
                x_kj ≥ 0,    k = 1, ..., K,  j = 1, ..., n.

We shall exhibit a feasible solution x̃ to this linear program whose objective function value is at most 2K D*; hence the objective function value of x̂ is also at most 2K D*, which implies the lemma. We construct the claimed feasible solution by applying the filtering technique of Lin & Vitter to x*, the optimal solution to (LP). Let α_j = 1 − Σ_{k ∈ B_j} x*_kj; note that α_j ≥ 1/2, since at most half of a weighted average can be more than twice the average. We then set

    x̃_kj = 0 if k ∈ B_j,    x̃_kj = x*_kj / α_j otherwise.

It is straightforward to see that x̃ is a feasible solution to the linear program above. Furthermore, x̃_kj ≤ 2 x*_kj for each k = 1, ..., K, j = 1, ..., n. Hence, from (2), we have that

    Σ_{j=1}^n p_j x̃_kj / (m_k s_k) ≤ 2 D*,    k = 1, ..., K.

This implies that x̃ has objective function value at most 2K D*; since x̂ is an optimal solution, it too must have objective function value at most 2K D*.

By combining Lemmas 3.1 and 3.2, we see that, for our assignment k(j), j = 1, ..., n, C + Σ_{k=1}^K D_k ≤ (2K + 2) D*.
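Given the fractional optimum of (LP), the assignment rule itself takes only a few lines. The sketch below uses the threshold 2 from the analysis just given; the refined analysis that follows replaces this threshold by 1 + √K. Names and data layout are again illustrative.

    def assign_speeds(p, speeds, counts, xval, threshold=2.0):
        """Round a fractional solution of (LP) to a speed assignment k(j).

        xval[(k, j)] -- optimal fractional assignment from (LP)
        threshold    -- speeds with p[j]/s_k > threshold * pbar_j are 'bad' for job j
        Returns a dict j -> k(j).
        """
        n, K = len(p), len(speeds)
        assign = {}
        for j in range(n):
            # average processing time of job j under the fractional solution
            pbar = sum((p[j] / speeds[k]) * xval[k, j] for k in range(K))
            good = [k for k in range(K) if p[j] / speeds[k] <= threshold * pbar]
            # among the speeds that are fast enough, pick the group of largest capacity
            assign[j] = max(good, key=lambda k: speeds[k] * counts[k])
        return assign

Since p̄_j is at least the processing time of job j on the fastest machine, the set of good speeds is never empty, so the rule is always well defined.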
In fact, this argument does not properly balance the contributions of the chains and of the overall machine loads. Instead, we modify the definition of B_j by replacing the 2 by 1 + √K; that is,

    B_j = {k : p_j / s_k > (1 + √K) p̄_j},    j = 1, ..., n.

We then use this definition of B_j to find an assignment k(j), j = 1, ..., n, as before: let k(j) be the index k ∉ B_j for which s_k m_k is largest. After modifying the definitions in this way, a proof exactly analogous to that of Lemma 3.1 yields the following claim.

Lemma 3.3. The assignment algorithm computes values k(j), j = 1, ..., n, such that, for each chain of jobs 𝒞,

    Σ_{j ∈ 𝒞} p_j / s_{k(j)} ≤ (1 + √K) D*.

If one traces through the proof of Lemma 3.2 with this modified definition of B_j, then one finds that α_j is now at least √K/(√K + 1). Consequently, the bound on the load of each group becomes

    Σ_{j=1}^n p_j x̃_kj / (m_k s_k) ≤ ((√K + 1)/√K) D*,    k = 1, ..., K,

and hence we obtain the following analogue of Lemma 3.2.

Lemma 3.4. Let x̂_kj = 1 if k = k(j), j = 1, ..., n, and let x̂_kj = 0 otherwise. Then

    Σ_{k=1}^K Σ_{j=1}^n p_j x̂_kj / (m_k s_k) ≤ (K + √K) D*.

By combining these lemmas with Theorem 2.1, we obtain the following result.

Theorem 3.5. The assignment algorithm, combined with the speed-based list scheduling algorithm, yields a (K + 2√K + 1)-approximation algorithm for Q|prec|C_max, where K denotes the number of distinct machine speeds.

Furthermore, the proof of this theorem actually implies the following somewhat stronger claim.

Corollary 3.6. The assignment algorithm, combined with the speed-based list scheduling algorithm, produces a schedule whose length C_max is within a factor of K + 2√K + 1 of D*, the optimal value of the linear program (LP).

Our next goal is to reduce the general problem to the case in which there are O(log m) distinct speeds. We may assume that the speeds of the m machines s_i, i = 1, ..., m, consist of K distinct values s_1 > s_2 > ... > s_K; of course, in the worst case K = m. We will show that these machine speeds can be adjusted downwards so that at most log m + 2 distinct speeds remain, while the optimal value of the linear program (LP) increases by at most a factor of 4. Hence, by Corollary 3.6, we obtain an O(log m)-approximation algorithm for Q|prec|C_max. The modification of the speeds consists of two steps: we first eliminate all machines whose speed is at most a 1/m fraction of the speed of the fastest machine, and then round each remaining speed down to the nearest power of 2. Since each machine is modified to run no faster than its true speed, any schedule for the modified instance can be interpreted as a schedule of no greater length for the original instance.

Let k̄ index the smallest speed that is greater than s_1/m; consequently, s_k ≤ s_1/m for k = k̄+1, ..., K. We shall call the machines of speed greater than s_1/m fast, and the remaining machines slow. We show that all work assigned to slow machines can be reassigned to the group of fastest machines. Since there are fewer than m slow machines, the total speed of the slow machines is less than m·(s_1/m), which is equal to s_1. In essence, this means that any machine of speed s_1 can do the work of all of the slow machines, while taking at most twice as long overall. More precisely, for any feasible solution x_kj, k = 1, ..., K, j = 1, ..., n, C_j, j = 1, ..., n, and D to (LP), we can construct a feasible solution x̃ of objective function value at most 2D that uses only the fast machines: we set D̃ = 2D and, for each job j = 1, ..., n,

    x̃_1j = x_1j + Σ_{k=k̄+1}^K x_kj,    x̃_kj = x_kj,  k = 2, ..., k̄,

and C̃_j = C_j. This is a feasible solution to (LP) for the instance in which only the fast machines remain. Hence we may assume that each machine speed satisfies s_k > s_1/m. Now round the speed of each machine down to the nearest power of 2; that is, each machine of speed s_i ∈ [2^t, 2^{t+1}) is simulated by a machine of speed 2^t. A similar calculation shows that the solution x̃_kj, k = 1, ..., k̄, j = 1, ..., n, together with 2C̃_j, j = 1, ..., n, and 2D̃, is a feasible solution to (LP) for this rounded data. There are at most log m + 2 distinct speeds after this rounding. Hence, we have obtained a (4 log m + O(√(log m)))-approximation algorithm for Q|prec|C_max.
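The two-step reduction of the speeds is equally mechanical; here is a sketch, assuming the input is a flat list with one speed per machine.

    import math

    def round_speeds(machine_speeds):
        """Drop machines slower than s1/m and round the rest down to powers of 2.

        machine_speeds -- list with one speed per machine
        Returns a dict: rounded speed -> number of machines, with O(log m) keys.
        """
        m = len(machine_speeds)
        s1 = max(machine_speeds)
        kept = [s for s in machine_speeds if s > s1 / m]      # discard the slow machines
        groups = {}
        for s in kept:
            # largest power of 2 that is at most s (speeds may first be rescaled so s1 = 1)
            rounded = 2.0 ** math.floor(math.log2(s))
            groups[rounded] = groups.get(rounded, 0) + 1
        return groups

Applying the algorithm of this section to the rounded instance, and interpreting the result on the original machines, loses only the factor 4 accounted for above.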
The construction above, which reduces the general case to one with few distinct speeds, is not optimized to produce the best leading constant in the resulting performance guarantee. For example, we can round all speeds less than s_1/(αm) down to 0 (that is, discard those machines), and round each remaining speed in the interval (s_1 β^{-(t+1)}, s_1 β^{-t}] down to s_1 β^{-(t+1)}, for any choice of α ≥ 1 and β > 1. The same calculations imply that at most log_β(αm) + O(1) distinct speeds result, and that the rounding increases the optimal value of the linear program (LP) by a factor of at most β(1 + 1/α). If we set β = e and α = log m (and we should emphasize at this point that we have been using log m to denote log_2 m), then the performance guarantee becomes (e ln 2) log m + O(√(log m)). Thus, we have established the following theorem.

Theorem 3.7. If K is the number of distinct machine speeds, then there is a min{K + 2√K + 1, (e ln 2) log m + O(√(log m))}-approximation algorithm for Q|prec|C_max.

In fact, we have shown that, given a feasible solution to (LP) with objective value D, there is a polynomial-time algorithm that finds a schedule of length at most min{K + 2√K + 1, (e ln 2) log m + O(√(log m))}·D. We will use this fact in the next section.

Now consider the generalization of this scheduling problem in which each job j, j = 1, ..., n, has a release date r_j before which it may not begin processing. As mentioned earlier, we can apply a general result of Shmoys, Wein & Williamson that, given our approximation algorithm for Q|prec|C_max, produces an algorithm for Q|r_j, prec|C_max with twice the performance guarantee. One must be a bit careful in applying this technique, since it requires that j' ≺ j imply r_{j'} ≤ r_j, but this is easy to ensure without loss of generality.

Corollary 3.8. For Q|r_j, prec|C_max, there is a 2·min{K + 2√K + 1, (e ln 2) log m + O(√(log m))}-approximation algorithm, where K is the number of distinct machine speeds.

Next, notice that our linear programming relaxation is also a relaxation of the variant of the problem in which preemption is allowed. As a consequence, we obtain the following corollary, which improves upon the O(√m)-approximation algorithm of Jaffe.

Corollary 3.9. For Q|prec, pmtn|C_max, there is a min{K + 2√K + 1, (e ln 2) log m + O(√(log m))}-approximation algorithm, where K is the number of distinct machine speeds.

Furthermore, since our algorithm produces schedules that do not preempt any jobs, we have established the following interesting relationship.

Corollary 3.10. For any instance of Q|prec|C_max, the ratio between its nonpreemptive optimal value and its preemptive optimal value is O(log m).

We will show next that we cannot obtain a constant performance guarantee by relying only on (LP) for the lower bound used in the proof of Theorem 3.5. Perhaps more surprisingly, even the integer programming version, that is, the program in which the constraints (6') are replaced by (6), is itself a substantial relaxation of the original problem Q|prec|C_max. The next theorem shows that, in comparing the optimal schedule length C*_max to the optimal value D of this integer program, our analysis is nearly tight.

Theorem 3.11. There are instances of Q|prec|C_max for which C*_max = Ω(log m / log log m)·D, where D is the optimal value of the mixed integer program (1)-(6).

Proof. Consider instances of Q|prec|C_max defined in the following way. Let μ ≥ 3 be an integer, and set ν = μ and D = ν; the particular value of D merely scales the time units in this construction. For k = 1, ..., ν there are m_k = μ^{2k} machines of speed s_k = μ^{ν−k}; thus there are μ^2 machines of speed μ^{ν−1}, μ^4 machines of speed μ^{ν−2}, and so forth, down to μ^{2ν} machines of speed 1. The number of machines m therefore satisfies μ^{2ν} ≤ m ≤ 2μ^{2ν}; since m ≥ 2^{2ν}, we have ν ≤ (log m)/2, and hence log m ≤ 2ν log ν + 1 ≤ 2ν log log m + 1, which implies that ν = Ω(log m / log log m). Notice also that the capacities m_k s_k = μ^{ν+k} increase geometrically with k: the slower the group, the larger its capacity.

There are ν groups of jobs, J_1, ..., J_ν, where group J_i contains ν μ^{2i} jobs, each of processing requirement μ^{ν−i}; for each job j ∈ J_i and each job j' ∈ J_{i+1}, i = 1, ..., ν−1, we have that j ≺ j'. If each job in the group J_i is assigned to be processed by a machine of speed s_i, for each i, then every job takes exactly μ^{ν−i}/s_i = 1 time unit on its assigned speed, and the load placed on the machines of speed s_i is ν μ^{2i} · μ^{ν−i} / (m_i s_i) = ν = D; setting C_j = i for each j ∈ J_i then gives a feasible solution to the mixed integer program with objective value D, and hence its optimal value is at most D.

We show next that each J_i, i = 1, ..., ν, needs at least D/3 time to complete in any schedule, preemptive or not. Consider any schedule, restricted to the jobs of J_i. Since we never use more machines than there are jobs, we may assume that only the fastest |J_i| = ν μ^{2i} machines are ever used for J_i. The machines of speed greater than s_{i+1} number Σ_{k=1}^{i} μ^{2k} ≤ 2μ^{2i}, and every other machine used has speed at most s_{i+1}; hence the total speed of all machines used to process J_i is at most

    Σ_{k=1}^{i} m_k s_k + ν μ^{2i} s_{i+1} ≤ 2μ^{ν+i} + ν μ^{ν+i−1} ≤ 3μ^{ν+i},

where the last inequality uses ν = μ. Since the total processing requirement of J_i is ν μ^{2i} · μ^{ν−i} = ν μ^{ν+i}, any preemptive or nonpreemptive schedule spends at least ν μ^{ν+i} / (3μ^{ν+i}) = ν/3 = D/3 time on J_i. Given the structure of the precedence graph, no job of J_{i+1} can start before all of J_i has completed, so these time periods are disjoint, and each schedule completes at time at least ν D/3 = Ω(log m / log log m)·D, which proves the theorem.
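For concreteness, the following sketch generates an instance family with the structure used in this proof: ν job groups linked in a chain of groups, and machine groups whose capacities m_k·s_k grow geometrically. The particular exponents are one consistent choice matching the calculation above.

    def gap_instance(mu):
        """One consistent instantiation of the construction in the proof of Theorem 3.11.

        nu = mu job groups J_1 < J_2 < ... < J_nu are linked by complete precedence
        between consecutive groups; machine group k has m_k = mu**(2k) machines of
        speed s_k = mu**(nu-k), so the capacities m_k*s_k grow geometrically.
        Returns (machines, job_groups): machines is a list of (count, speed) pairs,
        job_groups a list of (number_of_jobs, processing_requirement) pairs; with the
        target value D = nu, each job of J_i takes exactly one time unit at speed s_i.
        """
        nu = mu
        machines = [(mu ** (2 * k), mu ** (nu - k)) for k in range(1, nu + 1)]
        job_groups = [(nu * mu ** (2 * i), mu ** (nu - i)) for i in range(1, nu + 1)]
        return machines, job_groups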
In considering the gap between the O(log m) performance guarantee and the Ω(log m / log log m) lower bound of Theorem 3.11, we suspect that it might be possible to improve our algorithm. In particular, we think that it might be possible to round the data of any instance to one with K distinct speeds, while increasing the optimal value of the linear program (LP) by at most a constant factor, where K = O(log m / log log m). Of course, one advantage of our analysis is that, whenever the input can be rounded so that K is small, an improved performance guarantee is implied as well.

4 An extension for Q|r_j, prec|Σ w_j C_j

In this section we give an O(log m)-approximation algorithm for Q|r_j, prec|Σ w_j C_j, which is based on combining our algorithm for Q|prec|C_max with a technique for the special case in which the machines are identical, P|r_j, prec|Σ w_j C_j, that was introduced by Hall, Schulz, Shmoys & Wein.

We start by describing the key points of the result of Hall, Schulz, Shmoys & Wein for identical machines. First, they subdivide the time horizon into intervals of potential job completion times that increase geometrically; the ℓth interval is (2^{ℓ−1}, 2^ℓ]. Suppose that we knew just the interval that contains C̄_j for each j = 1, ..., n, where C̄_j denotes the completion time of job j in some optimal schedule; this rough information can be used to derive a schedule in which each job j completes by time O(C̄_j), j = 1, ..., n. We construct a fragment of the schedule for the jobs that complete in the ℓth interval by the standard list scheduling algorithm, ignoring release dates; Graham's analysis ensures that the length of this fragment is at most 2·2^ℓ. Each job j for which C̄_j ≤ 2^ℓ must also have r_j ≤ 2^ℓ. Hence, if we actually run the fragment for the ℓth interval during the time period (2^{ℓ+1}, 2^{ℓ+2}], we obtain a feasible schedule. Furthermore, if C̄_j > 2^{ℓ−1}, then job j completes in this schedule by time 2^{ℓ+2} = O(C̄_j), which implies that the algorithm delivers a schedule of weighted completion time within a constant factor of the optimum. Of course, we do not know the interval in which each job finishes in an optimal schedule. Hall, Schulz, Shmoys & Wein show that a linear programming relaxation can be used to provide sufficient estimates, and in this way obtain a constant-factor approximation algorithm for P|r_j, prec|Σ w_j C_j.

We shall extend their technique to uniformly related machines in a natural way. We use a linear programming relaxation in order to estimate the interval in which each job completes. This assigns a set of jobs to each interval, and each such set will be scheduled using our algorithm of Section 3. Each fragment of the schedule will then be run in a shifted time period, which is sufficiently long, and starts sufficiently late, so as to satisfy all release date constraints.
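The fragment-and-window bookkeeping can be sketched as follows for the identical-machines version just described; list_schedule is a placeholder for Graham's rule restricted to a fragment, and interval_of[j] is the (estimated) index ℓ with C̄_j ∈ (2^{ℓ−1}, 2^ℓ]. This is an illustration of the mechanism under our own conventions, not the algorithm of Hall, Schulz, Shmoys & Wein.

    def schedule_by_intervals(p, release, interval_of, list_schedule):
        """Run the fragment for each interval in its own shifted window.

        p[j]            -- processing time of job j
        release[j]      -- release date of job j
        interval_of[j]  -- index l with the estimated completion in (2**(l-1), 2**l]
        list_schedule(jobs) -> dict of start times for that fragment, packed from
            time 0 and ignoring release dates.
        Returns absolute start times for all jobs.
        """
        L = max(interval_of.values())
        t = 0.0
        abs_start = {}
        for l in range(1, L + 1):
            fragment = [j for j in interval_of if interval_of[j] == l]
            if not fragment:
                continue
            rel_start = list_schedule(fragment)
            # start late enough that every release date in the fragment has passed
            window_start = max([t] + [release[j] for j in fragment])
            for j in fragment:
                abs_start[j] = window_start + rel_start[j]
            t = window_start + max(rel_start[j] + p[j] for j in fragment)
        return abs_start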
As in the result of Hall, Schulz, Shmoys & Wein, we use an interval-indexed linear programming relaxation in order to estimate the interval in which each job completes. Let s_1 > s_2 > ... > s_K denote the distinct speeds in our input; we assume, without loss of generality, that p_j/s_k ≥ 1 for each k = 1, ..., K, j = 1, ..., n. Let L = ⌈log(max_j r_j + Σ_{j=1}^n p_j/s_K)⌉, so that 2^L is an upper bound on the completion time of any job in any schedule without unnecessary idle time. We rely on decision variables x_{kℓj}, which indicate whether or not job j completes on a machine of speed s_k in the ℓth interval (2^{ℓ−1}, 2^ℓ], ℓ = 1, ..., L. Since we will bound the length of each fragment by exhibiting a feasible solution to the corresponding instance of (LP), we also introduce variables C_j, j = 1, ..., n, which will be used to bound the lengths of the chains.

Next, we describe our linear programming relaxation. Since C_j can be thought of as the finishing time of job j, the objective is

    minimize  Σ_{j=1}^n w_j C_j.

All of the jobs must be assigned to some machine speed and some interval:

    (8)    Σ_{k=1}^K Σ_{ℓ=1}^L x_{kℓj} = 1,    j = 1, ..., n.

The capacity constraints up to time 2^ℓ can be written as follows:

    (9)    Σ_{j=1}^n Σ_{t=1}^{ℓ} p_j x_{ktj} / (s_k m_k) ≤ 2^ℓ,    k = 1, ..., K,  ℓ = 1, ..., L.

The time required to execute a job cannot exceed the period of time between its release date and its completion:

    (10)   Σ_{k=1}^K Σ_{ℓ=1}^L (p_j / s_k) x_{kℓj} ≤ C_j − r_j,    j = 1, ..., n.

As in (LP), for the precedence constraints we require that

    (11)   Σ_{k=1}^K Σ_{ℓ=1}^L (p_j / s_k) x_{kℓj} ≤ C_j − C_{j'},    if j' ≺ j.

In addition, if j' ≺ j, the interval in which j completes cannot precede the one in which j' completes:

    (12)   Σ_{t=1}^{ℓ} Σ_{k=1}^K x_{ktj} ≤ Σ_{t=1}^{ℓ} Σ_{k=1}^K x_{ktj'},    if j' ≺ j,  ℓ = 1, ..., L.

Since the finishing time of job j is certainly no smaller than the left endpoint of the interval in which it completes,

    (13)   Σ_{ℓ=1}^L Σ_{k=1}^K 2^{ℓ−1} x_{kℓj} ≤ C_j,    j = 1, ..., n.

And finally, we impose

    (14)   x_{kℓj} ≥ 0,    k = 1, ..., K,  ℓ = 1, ..., L,  j = 1, ..., n.

We first solve this linear program; let x*_{kℓj}, k = 1, ..., K, ℓ = 1, ..., L, j = 1, ..., n, and C*_j, j = 1, ..., n, denote an optimal solution. We assign each job j to an interval in the following way, which builds on the half-interval technique introduced by Hall, Shmoys & Wein. First, for j = 1, ..., n, let α_j denote the minimum value of ℓ for which

    Σ_{t=1}^{ℓ} Σ_{k=1}^K x*_{ktj} ≥ 1/2,

that is, the first interval by which at least half of job j has been completed in the fractional solution. Next, let β_j denote the minimum value of ℓ for which 2^{ℓ−1} ≥ C*_j. We set ℓ(j) = max{α_j, β_j}, and J_ℓ = {j : ℓ(j) = ℓ}, ℓ = 1, ..., L.
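Computing the interval assignment from the fractional optimum is again straightforward; a sketch, under the same illustrative conventions as before (xval and Cval hold the optimal values of the variables x_{kℓj} and C_j):

    import math

    def assign_intervals(xval, Cval, K, L):
        """Compute l(j) = max(alpha_j, beta_j) and the induced partition J_l.

        xval[(k, l, j)] -- fractional amount of job j finishing at speed k in interval l
        Cval[j]         -- the LP completion-time variable of job j
        Returns (interval, partition), where interval[j] = l(j) and partition[l] = J_l.
        """
        interval = {}
        for j in Cval:
            mass, alpha = 0.0, L
            for l in range(1, L + 1):
                mass += sum(xval[k, l, j] for k in range(K))
                if mass >= 0.5:                 # first interval covering half the mass
                    alpha = l
                    break
            beta = max(1, math.ceil(math.log2(Cval[j])) + 1)   # smallest l with 2**(l-1) >= C_j
            interval[j] = max(alpha, beta)
        partition = {}
        for j, l in interval.items():
            partition.setdefault(l, []).append(j)
        return interval, partition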
We construct a schedule for each subset J_ℓ separately, by applying the algorithm of the previous section. In order to bound the length of the fragment of the schedule for J_ℓ, we show that the optimal solution (x*, C*) can be used to define a feasible solution to the instance of (LP) for this subset of jobs. Note that the constraints (11) and (12) ensure that if j' ≺ j, then ℓ(j') ≤ ℓ(j); hence, if we concatenate the fragments for J_1, J_2, ..., J_L, and each fragment satisfies the precedence constraints among the jobs scheduled within it, then the resulting schedule satisfies all of the precedence constraints.

For each job j, j = 1, ..., n, let γ_j denote the total fraction of job j that is completed, in the fractional solution x*, over all speeds, within the first α_j intervals:

    γ_j = Σ_{t=1}^{α_j} Σ_{k=1}^K x*_{ktj}.

By our definition of α_j, γ_j ≥ 1/2, j = 1, ..., n. We set

    x̃_kj = (1/γ_j) Σ_{t=1}^{α_j} x*_{ktj},    k = 1, ..., K,  j = 1, ..., n.

Fix some ℓ = 1, ..., L, and consider the jobs in J_ℓ. If we set C̃_j = 2C*_j, j ∈ J_ℓ, and D̃ = 2^{ℓ+1}, then x̃, C̃, and D̃ constitute a feasible solution to (LP) for this subset of jobs. Clearly, x̃ satisfies the assignment constraints (1). Since each γ_j ≥ 1/2 and x* satisfies (10), x̃ and C̃ satisfy (3); similarly, the constraints (11) ensure that (4) is satisfied. Since x̃_kj ≤ 2 Σ_{t=1}^{ℓ} x*_{ktj} for each j ∈ J_ℓ, the constraints (9) ensure that (2) is satisfied with D̃ = 2^{ℓ+1}. Finally, for each job j ∈ J_ℓ, we have ℓ ≥ β_j, and hence C̃_j = 2C*_j ≤ 2^{β_j} ≤ 2^ℓ ≤ D̃.

Let ρ denote the performance guarantee, relative to the value of (LP), established at the end of Section 3; that is, given a feasible solution to (LP) of value D̃, we can construct a schedule of the corresponding jobs of length at most ρ D̃, where ρ = min{K + 2√K + 1, O(log m)}. Hence we can construct a schedule fragment for J_ℓ of length at most ρ·2^{ℓ+1}. This fragment shall be run from time ρ(2^{ℓ+1} − 2) to ρ(2^{ℓ+2} − 2), so that the fragments occupy consecutive disjoint windows. For each job j ∈ J_ℓ we have r_j ≤ C*_j ≤ 2^{β_j−1} ≤ 2^{ℓ−1} ≤ ρ(2^{ℓ+1} − 2), and so we are assured that the resulting schedule satisfies the release date constraints.

It remains to show that each job j completes at a time that is not much greater than C*_j; we shall show that 2^{ℓ(j)} ≤ 4C*_j, j = 1, ..., n. Since each job j ∈ J_ℓ completes by time ρ(2^{ℓ+2} − 2) < 4ρ·2^{ℓ(j)}, this will imply that we have derived a 16ρ-approximation algorithm, because Σ_j w_j C*_j is a lower bound on the optimal value. We consider two cases. If ℓ(j) = β_j, then, by the minimality of β_j, 2^{β_j−2} < C*_j (and the bound holds trivially when β_j = 1, since C*_j ≥ 1); hence 2^{ℓ(j)} ≤ 4C*_j. In the remaining case, ℓ(j) = α_j > β_j. Then, by the minimality of α_j, more than half of job j completes in the intervals α_j, ..., L in the fractional solution, and so, by (13),

    C*_j ≥ Σ_{ℓ=1}^L Σ_{k=1}^K 2^{ℓ−1} x*_{kℓj} ≥ (1/2)·2^{α_j−1} = 2^{α_j−2}.

Hence 2^{ℓ(j)} ≤ 4C*_j, and we have proved the following theorem.

Theorem 4.1. There is a polynomial-time algorithm for Q|r_j, prec|Σ w_j C_j that finds a schedule of objective function value within a factor of O(min{K, log m}) of the optimum, where K is the number of distinct machine speeds.

Our linear programming relaxation is also a relaxation of the variant of the problem in which preemption is allowed. Consequently, we obtain the following corollaries.

Corollary 4.2. For Q|r_j, prec, pmtn|Σ w_j C_j, there is an O(min{K, log m})-approximation algorithm, where K is the number of distinct machine speeds.

Corollary 4.3. For any instance of Q|r_j, prec|Σ w_j C_j, the ratio between its nonpreemptive optimal value and its preemptive optimal value is O(log m).

Acknowledgments

We are grateful to Joel Wein for several helpful discussions.

References

S. Chakrabarti, C. Phillips, A. S. Schulz, D. B. Shmoys, C. Stein, and J. Wein. Improved scheduling algorithms for minsum criteria. In F. Meyer auf der Heide and B. Monien, editors, Automata, Languages and Programming: Proceedings of the 23rd International Colloquium (ICALP), Lecture Notes in Computer Science, Springer, Berlin.

T. Gonzalez, O. H. Ibarra, and S. Sahni. Bounds for LPT schedules on uniform processors. SIAM J. Comput.

R. L. Graham. Bounds for certain multiprocessing anomalies. Bell System Tech. J.

R. L. Graham, E. L. Lawler, J. K. Lenstra, and A. H. G. Rinnooy Kan. Optimization and approximation in deterministic sequencing and scheduling: a survey. Ann. Discrete Math.

L. A. Hall, A. S. Schulz, D. B. Shmoys, and J. Wein. Scheduling to minimize average completion time: off-line and on-line approximation algorithms. Technical report, School of Operations Research and Industrial Engineering, Cornell University, Ithaca, NY.

L. A. Hall, D. B. Shmoys, and J. Wein. Scheduling to minimize average completion time: off-line and on-line algorithms. In Proceedings of the 7th Annual ACM-SIAM Symposium on Discrete Algorithms.

D. S. Hochbaum and D. B. Shmoys. A polynomial approximation scheme for scheduling on uniform processors: using the dual approximation approach. SIAM J. Comput.

J. Jaffe. Efficient scheduling of tasks without full use of processor resources. Theoret. Comput. Sci.

J. K. Lenstra and A. H. G. Rinnooy Kan. Complexity of scheduling under precedence constraints. Oper. Res.

J.-H. Lin and J. S. Vitter. ε-approximations with minimum packing constraint violation. In Proceedings of the 24th Annual ACM Symposium on Theory of Computing.

J. W. S. Liu and C. L. Liu. Bounds on scheduling algorithms for heterogeneous computing systems. In J. L. Rosenfeld, editor, Information Processing 74, North-Holland.

A. S. Schulz. Scheduling and Polytopes. PhD thesis, Technische Universität Berlin, Berlin, Germany.

D. B. Shmoys and É. Tardos. An approximation algorithm for the generalized assignment problem. Math. Programming.

D. B. Shmoys, J. Wein, and D. P. Williamson. Scheduling parallel machines on-line. SIAM J. Comput.