A New Approach to Computing Optimal Schedules for the Job-Shop Scheduling Problem

Paul Martin and David B. Shmoys
School of Operations Research and Industrial Engineering, Cornell University, Ithaca, NY

Abstract. From a computational point of view, the job-shop scheduling problem is one of the most notoriously intractable NP-hard optimization problems. In spite of a great deal of substantive research, there are instances of even quite modest size for which it is beyond our current understanding to solve to optimality. We propose several new lower bounding procedures for this problem and show how to incorporate them into a branch-and-bound procedure. Unlike almost all of the work done on this problem in the past thirty years, our enumerative procedure is not based on the disjunctive graph formulation, but is rather a time-oriented branching scheme. We show that our approach can solve most of the standard benchmark instances and obtains the best known lower bounds on each.

Introduction

In the job-shop scheduling problem we are given a set of n jobs J, a set of m machines M, and a set of operations O. Each job consists of a chain of operations: let O_j be the chain of operations required by job j, whereas O_i is the set of operations that must be processed on machine i. Each operation k requires a specified amount of processing p_k on a specified machine M_k. A schedule is an assignment of starting times to the operations such that each operation runs without interruption, no operation starts before the completion time of its predecessor in its job, and each machine processes at most one operation at any moment in time. The goal is to find the schedule that minimizes the makespan, the time at which the last operation completes. In this paper we focus on the special case in which each job has exactly one operation on each machine; therefore |O_j| = m for each j ∈ J. The main ideas of this paper are easily extended to the general case. The feasibility problem
is, given a time T, to determine whether there exists a schedule that completes by time T. Throughout this paper we consider the feasibility problem, and use a bisection search to reduce the optimization problem to the feasibility problem.

* Research supported in part by the NEC Research Institute and in part by NSF grant CCR.
** Research supported in part by NSF grants CCR and DMS and by an ONR grant.

The job-shop scheduling problem is NP-hard and has proven to be very difficult even for relatively small instances. An instance with 10 jobs and 10 machines posed in a book by Muth & Thompson remained unsolved until Carlier & Pinson finally solved it, and there is an instance of Lawrence that remains unsolved despite the great amount of effort that has been devoted to improving optimization codes for this problem. The most effective optimization techniques to date have been branch-and-bound algorithms based on the disjunctive graph model for the job-shop scheduling problem. The decision variables in this formulation indicate the order in which operations are processed on each machine; once these variables are set, the time at which each operation starts processing can be easily computed. Algorithms of this type have been proposed by Carlier & Pinson, Brucker, Jurisch & Sievers, and Applegate & Cook.

Although solving job-shop scheduling problems to optimality is difficult, recently there has been progress in developing heuristics that find good schedules. Adams, Balas & Zawack proposed the shifting bottleneck procedure, which uses a primitive form of iterated local search to produce substantially better schedules than were previously computed. Currently, all of the best known algorithms use iterated local search. The current champions are a taboo search algorithm by Nowicki & Smutnicki and a technique called reiterated guided local search by Balas & Vazacopoulos. Vaessens, Aarts & Lenstra provide an excellent survey of recent developments in this area. There has been less progress in
finding good lower bounds for the job-shop scheduling problem. The job bound is the maximum total processing required for the operations of a single job. Similarly, the maximum amount of processing on any one machine gives the machine bound. We can improve upon the machine bound by considering the one-machine subproblem obtained by relaxing the capacity constraint on all machines except one. This problem is the one-machine scheduling problem with heads and tails, 1 | r_j | L_max in standard scheduling notation. Here each operation k ∈ O_i has a head r_k, equal to the total processing requirement of all operations that precede k in its job; a tail q_k, the analogous sum for operations that follow k; and a body p_k, the amount of processing required by k. Each operation k may not begin processing before r_k and must be processed for p_k time units; if C_k denotes the time at which k completes processing, the length of the schedule is max_{k ∈ O_i} {C_k + q_k}. The one-machine bound of Bratley, Florian & Robillard is the maximum over all machines i ∈ M of the bound for the one-machine subproblem for i. This subproblem is NP-hard, although in practice it is not difficult to solve.

The preemptive one-machine subproblem is the relaxation of the one-machine subproblem obtained by relaxing the constraint that each operation must be processed without interruption. Jackson gives an O(n log n) algorithm for computing the optimal schedule for this subproblem. Carlier & Pinson give an algorithm that strengthens the preemptive one-machine bound but maintains polynomial computability. Their algorithm updates the heads and tails of operations to account for the fact that we are looking for a nonpreemptive schedule. Below we show that their updates can be seen as adjusting each operation's head or tail to the earliest time at which the operation can start such that it processes continuously and such that there is a preemptive schedule for the remaining operations on that machine. Fisher, Lageweg, Lenstra &
Rinnooy Kan examine surrogate duality, while Balas and Applegate & Cook attempt to find bounds by examining the polyhedral structure of the problem. Neither approach has proven to be practical so far; the strongest computational results, those of Applegate & Cook, use only combinatorial bounds in their branch-and-bound algorithm.

In this paper we present new lower bounding techniques based on a time-oriented approach to job-shop scheduling. Recently there has been significant success in applying time-oriented approaches to other scheduling problems. The first approach we propose is based upon the packing formulation, a time-indexed integer programming formulation of the job-shop scheduling problem. Here we attempt to pack schedules for each job so that they do not overlap on any machine. The bound we obtain is based on the fractional packing formulation, which is the linear relaxation of the packing formulation. This approach was motivated by the packing approximation algorithm of Plotkin, Shmoys & Tardos, along with an application to scheduling given by Stein. It gives stronger bounds than those obtained by other linear programming relaxations, but appears to be too time consuming.

To improve on the packing bound, we attempt to restrict the time in which operations are allowed to start processing; we call this interval a processing window. The Carlier-Pinson head and tail modification algorithm can be used in an iterated fashion to reduce each operation's processing window. In addition to reducing processing windows for use with the fractional packing formulation, iterated Carlier-Pinson can itself provide a bound, since if any feasible processing window is empty then we have solved the feasibility problem: no feasible schedule of length T exists. Iterated Carlier-Pinson gives results that are approximately the same as those obtained from the fractional packing formulation, and it obtains these bounds in only a fraction of the time. Combining the two techniques provides only a slightly
improved bound.

In order to reduce the size of the processing windows, we introduce a technique called shaving. Here we suppose that an operation starts at one end of its processing window, and then try to determine whether we can prove that there is no feasible schedule satisfying this assumption. If we can prove this, then we can conclude that the operation cannot start at that end of its processing window, and we can therefore reduce its window. This technique depends on our ability to determine that there is no schedule after restricting the processing window of one operation. When we use the preemptive one-machine relaxation to determine that there is no schedule, we call it one-machine shaving; we prove that this is equivalent to iterated Carlier-Pinson. When we use the nonpreemptive one-machine relaxation, we call it exact one-machine shaving, and we show that this gives only a slightly better bound than iterated Carlier-Pinson but is significantly more costly to compute. If we use iterated Carlier-Pinson to show that there is no schedule, we call it CP shaving; this bound is significantly better than iterated Carlier-Pinson but requires more time to compute. Finally, if we use CP shaving to show that there is no schedule, then we call it double shaving. This bound is excellent, but takes a substantial amount of time to compute, much too long to be practical in most branch-and-bound schemes.

Traditional job-shop branch-and-bound schemes are not particularly well suited to a time-oriented lower bound. We give two new schemes designed for use with a processing-window approach. The first is a simplistic approach which starts at time zero and either assigns an available operation to start immediately or delays it. This approach works well only if, through the use of one of the shaving techniques, we have been able to greatly reduce the processing windows. The second technique works by examining a machine over a time period in which we know that there is very little idle time. We can then branch on
which operation goes first in this time period. This technique is very effective on a wide range of problems. It is particularly effective when the one-machine bound is tight, a situation in which many other algorithms seem to perform poorly.

Using these techniques we were able to improve upon the best known lower bound for several of the standard benchmark problems, and in some cases we establish optimality. We have tested all of our algorithms on the standard benchmark problems: we have used all the problems for which Vaessens, Aarts & Lenstra give recent local search results, as well as three larger problems from Adams, Balas & Zawack. All computations were performed on a Pentium computer. The table of results gives the bounds that we were able to compute, along with the amount of time (in seconds) that it took to compute them.

One important advantage of our time-oriented approach is that we can trivially extend it to incorporate additional constraints, such as release dates and deadlines for jobs, as well as internal release dates and deadlines for operations. In fact, one would expect that adding these constraints would improve the performance of our algorithms.

A Packing Formulation

In spite of a great deal of effort, the disjunctive integer programming formulation of the job-shop problem appears to be of little assistance in solving instances of even moderate size; furthermore, its natural linear programming relaxation has been shown to give very poor lower bounds for the problem. We propose a new linear programming formulation which gives better bounds. A job-schedule is an assignment of starting times to the operations of one job such that no operation starts before its predecessor completes and such that all operations of that job complete by time T. The job-shop feasibility problem can then be restated: does there exist a set of job-schedules, one for each job, such that no two job-schedules require the same machine at the same time? This gives rise to the following packing
formulation, in which we attempt to pack job-schedules so as to minimize the maximum number of jobs concurrently assigned to some machine. For each job j ∈ J, let F_j denote the set of all job-schedules for j, and let x_{jσ} be a variable that indicates whether j is scheduled according to job-schedule σ ∈ F_j. Let δ_{it}(σ) indicate whether job-schedule σ requires machine i at time t, and let Λ be the maximum number of jobs concurrently on any machine. Then the integer programming formulation is:

minimize    Λ
subject to  Σ_{σ ∈ F_j} x_{jσ} = 1                        for all j ∈ J,
            Σ_{j ∈ J} Σ_{σ ∈ F_j} δ_{it}(σ) x_{jσ} ≤ Λ     for all i ∈ M, t = 1, …, T,
            x_{jσ} ∈ {0, 1}                               for all j ∈ J, σ ∈ F_j.

There is a feasible job-shop schedule which completes by T if and only if the optimal value of the above integer programming problem is 1. A natural linear programming relaxation can be obtained by replacing the integrality constraints by x_{jσ} ≥ 0 for all j ∈ J, σ ∈ F_j; let Λ* denote its optimal value. If we show that Λ* > 1, then we have proved that there is no schedule with makespan at most T, so T + 1 is a valid lower bound for this instance. One natural way to show that Λ* > 1 is to exhibit a feasible dual solution of value greater than 1. We will give the dual linear program in a compact form which is easy to derive. The dual variables are y_{it}, i ∈ M, t ≤ T, where each can be viewed as the cost of scheduling on machine i at time t. Thus the cost of σ ∈ F_j is C_σ(y) = Σ_{i ∈ M} Σ_{t ≤ T} δ_{it}(σ) y_{it}. If we let S = { y ≥ 0 : Σ_{i ∈ M} Σ_{t ≤ T} y_{it} = 1 }, then the dual is

max_{y ∈ S} Σ_{j ∈ J} min_{σ ∈ F_j} C_σ(y).

Solving the packing relaxation. The packing formulation and its linear relaxation contain an exponential number of variables; fortunately, fractional packing problems can be approximately solved by algorithms that are insensitive to the number of variables. Plotkin, Shmoys & Tardos provide a framework for obtaining solutions within a specified accuracy for such problems. We view the relaxation as the fractional packing problem in which we are attempting to pack job-schedules subject to a restriction on the amount of work on each machine at each time unit. Let P_j be the set consisting of all convex combinations of the
job-schedules for job j, and let P = P_1 × … × P_n. Then our relaxation can be rewritten as min { Λ : x ∈ P, x satisfies the load constraints }. To apply the packing algorithm, we need to be able to efficiently evaluate the dual objective for a given y, and to efficiently optimize over P. To evaluate the dual, we need to find the minimum-cost job-schedule for each job j ∈ J. This can be done by a simple dynamic programming algorithm: we look at each operation k ∈ O_j in processing order, and for each possible completion time t we calculate c_k(t), the minimum cost of k and all operations preceding k in O_j such that k completes by t. Then, if l is the last operation of job j, c_l(T) gives the minimum cost of a job-schedule for j with respect to y. We can then use this dynamic program to find an optimal solution in P.

The fractional packing algorithm works as follows. Start with some x ∈ P and an error parameter ε. Calculate Λ for x; if Λ ≤ 1 then we are done. Otherwise, compute y_{it} as an exponential function of the load on machine i at time t, scaled appropriately. Then apply the dynamic programming algorithm to evaluate the dual objective for y and to find the point x̃ ∈ P of minimum cost with respect to y. If the dual value exceeds 1, or is within a factor of (1 − ε) of the primal value, then stop. Otherwise, update x by taking a convex combination of x and x̃, and start over. If the dual value is greater than 1 when we terminate, then we can conclude that T + 1 is a valid lower bound; otherwise, we cannot determine a lower bound. The running time of the algorithm is proportional to T and to a polynomial in 1/ε.

Computational results. The times that we report include only the time that our packing algorithm uses to show that the bound is valid; they do not include any time to search for the correct value of T. We cut off our algorithm after a fixed number of iterations, so there is no guarantee that the bounds we have given are the best possible. For a given T, we are trying to determine whether Λ* > 1. When Λ* is very close to 1 this can be difficult: the number of iterations required to determine whether Λ* > 1 increases dramatically as Λ* approaches 1. The bounds found by the packing algorithm are significantly better than other bounds in the literature based on LP relaxations, such as the cutting plane approach of Applegate & Cook. The algorithm, however, is still too slow to be of practical significance.

Processing Windows

When considering an instance of the feasibility problem with target makespan T, we can view each operation as constrained to start within a given time interval. For example, consider an operation k ∈ O_i, and let r_k and q_k denote, respectively, its head and tail in the corresponding one-machine instance of 1 | r_j | L_max; that is, r_k is the total processing requirement of operations preceding k in its job j, and q_k is the analogous sum for operations following k in j. Then the job bound implies that the starting time of operation k must occur in the interval [r_k, T − q_k − p_k]. In the packing formulation we consider only job-schedules in which each operation starts in this window. We can strengthen this formulation by computing restricted processing windows; that is, for each operation k, we specify a smaller interval [u_k, v_k] within which the operation must begin processing.

In this section we describe several approaches to compute restricted processing windows by relying on bounds stronger than the job bound; in particular, we rely on lower bounds given by the one-machine subproblem. In describing our algorithms we focus only on increasing the start of the processing window, since updating the end of the processing window can be done analogously due to the symmetry of the problem.

One-machine shaving. We shall first consider the one-machine relaxation in order to compute improved processing windows. Focus on machine i, and suppose that we are given a proposed processing window [u_k, v_k] for each operation k ∈ O_i; we wish to decide whether machine i can be scheduled so that each operation starts within its designated window. It is equivalent to decide whether there is a feasible schedule of the one-machine subproblem in which k has head u_k, body p_k, and tail T − p_k − v_k. Hence we can
rephrase the problem of computing a reduced processing window as that of adjusting heads and tails, an idea first introduced by Carlier & Pinson. We can exploit the fact that the problem 1 | r_j, pmtn | L_max can be solved in polynomial time and, in fact, admits a nice min-max theorem. For any S ⊆ O_i, let p(S) = Σ_{k ∈ S} p_k, r(S) = min_{k ∈ S} r_k, and q(S) = min_{k ∈ S} q_k. Since the first job in S can start no earlier than r(S), and the tail of the last job of S to complete processing is at least q(S), the length of the schedule is at least r(S) + p(S) + q(S). Remarkably, Carlier & Pinson have shown that the minimum length of a preemptive schedule is equal to max_{S ⊆ O_i} { r(S) + p(S) + q(S) }.

Consider an operation k ∈ O_i with current processing window [u_k, v_k]; we shall maintain the invariant that if there is a feasible schedule for the job-shop instance of length T, then there is a schedule in which each operation starts within its current window. We shall compute q̄_k, the maximum tail for operation k for which there still is a feasible preemptive schedule for the one-machine subproblem where the other data are unchanged. Operation k is processed without interruption in any job-shop schedule; therefore its window may be updated to [T − q̄_k − p_k, v_k]. While one could merely apply a bisection search to find q̄_k for each k ∈ O_i, we can instead apply the following more efficient procedure, One-Machine Shave, which is based on the min-max theorem above. For each ordered pair of operations b, c ∈ O_i, first compute P_{bc} = Σ_{k ∈ S} p_k, where S = { k ∈ O_i : r_k ≥ r_b and q_k ≥ q_c }, provided S ≠ ∅. If r_b + P_{bc} + q_c > T, then there does not exist a feasible job-shop schedule of length T. For each operation k ∈ O_i and each ordered pair (b, c) such that (i) q_k < q_c, (ii) r_k ≥ r_b, and (iii) r_b + P_{bc} + p_k + q_c > T, update u_k ← max{ u_k, r_b + P_{bc} }.

Lemma. One-Machine Shave correctly updates the processing window of each operation on the machine in O(n²) time.

Once we have updated the starting point of each interval, we must also update the ending point. We can then repeat these updates until no further changes can be made for this machine.
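The min-max characterization above suggests a direct way to compute the preemptive one-machine bound: the maximizing set S may always be taken to be of the threshold form { k : r_k ≥ r_b and q_k ≥ q_c } for some pair of operations (b, c), so O(n²) candidate sets suffice. The following Python sketch is our own illustration of this computation, not code from the paper; the operation triples (head, body, tail) in the usage example are hypothetical.

```python
def preemptive_one_machine_bound(ops):
    """Compute max over subsets S of r(S) + p(S) + q(S), where
    r(S) is the minimum head, p(S) the total processing, and q(S)
    the minimum tail over S.  By the min-max theorem of Carlier &
    Pinson, this equals the optimal preemptive schedule length.

    ops is a list of (head r, body p, tail q) triples.  Only sets
    of the form {k : r_k >= r_b, q_k >= q_c} need to be examined,
    one for each ordered pair (b, c) of operations."""
    best = 0
    for rb, _, _ in ops:          # candidate head threshold r_b
        for _, _, qc in ops:      # candidate tail threshold q_c
            S = [(r, p, q) for (r, p, q) in ops if r >= rb and q >= qc]
            if S:
                value = (min(r for r, _, _ in S)
                         + sum(p for _, p, _ in S)
                         + min(q for _, _, q in S))
                best = max(best, value)
    return best

# Hypothetical three-operation instance: the bound is 7, which is
# also the makespan produced by Jackson's largest-tail-first rule.
print(preemptive_one_machine_bound([(0, 3, 2), (1, 2, 0), (2, 2, 3)]))
```

As written the sketch runs in O(n³) time; sorting the operations by head and by tail brings it down to the O(n²) of the procedure in the text, and Jackson's rule gives the same value in O(n log n).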
Carlier-Pinson updates. Carlier & Pinson gave the first procedure to update heads and tails for the one-machine problem. An ascendant set (k, S) consists of an operation k and a set of operations S, k ∉ S, such that

(1) r(S) + p(S) + p_k + q(S) > T,
(2) r_k + p(S) + p_k + q(S) > T.

Ascendant sets are important because condition (1) implies that in any feasible schedule k is processed first or last among S ∪ {k}, and condition (2) implies that k cannot be processed first. Therefore, if there is a feasible schedule, k must be processed last. Once we find an ascendant set (k, S), we can update u_k ← max{ u_k, c(S) }, where c(S) = max{ r(S') + p(S') : S' ⊆ S } is a lower bound on the time at which the operations in S can complete processing. Carlier and Pinson give an O(n²) algorithm, based on Jackson's algorithm for 1 | r_j, pmtn | L_max, that finds ascendant sets and updates all of the heads or all of the tails for a one-machine problem. We can use their algorithm directly on the one-machine relaxation to improve the processing windows. Recently, Carlier & Pinson have given an O(n log n) implementation of this algorithm. Once again, we need to apply this procedure alternately to heads and tails until no further updates are possible. Surprisingly, the two algorithms, One-Machine Shave and Carlier-Pinson Updates, yield the same result.

Theorem. For any one-machine subproblem, applying Carlier-Pinson Updates until no more updates can be found is equivalent to applying One-Machine Shave until no updates are found.

Another O(n²) algorithm for performing Carlier-Pinson updates. We shall give another algorithm to compute Carlier-Pinson updates, by combining ideas from the previous two algorithms, which, as we will see below, is faster when solving a sequence of closely related problems. We can perform all updates for the ascendant sets in O(n²) time by repeatedly applying an O(n) algorithm that performs the updates for all ascendant sets (l, S) with a fixed value of q(S). Suppose that we wish to find all ascendant sets (l, S) in which q(S) = q_k for some fixed operation k ∈ S. We first construct a nested list of candidates for the set S. Set S_1 = { l : r_l ≥ r_k and q_l ≥
q_k }. Find the operation l ∈ O_i \ S_1 with maximum head length r_l; if q_l ≥ q_k, then create a new set S_2 = S_1 ∪ {l}. Continue in this manner, iteratively choosing the operation with the next largest head and creating a new set whenever its tail is at least q_k. Let S_1, …, S_b be the sets constructed; the construction ensures that r(S_1) > r(S_2) > … > r(S_b). Finally, as we construct each S_i, also compute c(S_i), i = 1, …, b.

Let L denote the set of operations l ∈ O_i for which q_l < q_k. Let l* be the operation in L for which p_{l*} is maximum. If (l*, S_b) does not satisfy condition (1), then each (l, S_b), l ∈ L, also does not satisfy it, by the choice of l*, so S_b can be eliminated as a candidate set. If (l*, S_b) does not satisfy condition (2), then each (l*, S_i), i < b, also does not satisfy it, and so l* can be removed from L. If both ascendant set conditions are satisfied, then (l*, S_b) is an ascendant set; we can then increase the head of l* to c(S_b). We can remove l* from L, since S_b has the latest completion time of all candidate sets. We continue this process, removing either one operation or one set, until one list is empty. We have now found all updates for ascendant sets (l, S) with q(S) = q_k. Once again, we must repeatedly apply this algorithm until no further updates are made.

Iterated one-machine window reduction. When we update the processing window [u_k, v_k] for an operation k ∈ O_j, this can refine the windows of other operations of job j. For example, if we increase the start of k's interval from u_k to u'_k, then its successor in j cannot start until u'_k + p_k. Hence updating windows on machine i can cause further updates on all other machines. This approach was used by Applegate & Cook, by Brucker, Jurisch & Sievers, and by Carlier & Pinson as well, although this is apparent only from their computational results. We shall refer to the procedure of iteratively updating the processing windows for one machine and propagating the implications of these updates through each job, until no further updates can be made, as an iterated one-machine window reduction algorithm.

Exact one-machine shaving. We
can also base the update of the processing window for an operation k ∈ O_i on finding the maximum tail q̄_k such that there exists a feasible nonpreemptive schedule for this one-machine subproblem. Once again, after computing q̄_k, we can update the start of k's interval to max{ u_k, T − q̄_k − p_k }. Computing q̄_k requires solving several instances of 1 | r_j | L_max; while this is an easy NP-hard problem, it is still fairly time-consuming to compute q̄_k. This technique is interesting because it gives the maximum window reduction obtainable by considering only the one-machine relaxation. It does only slightly better than the techniques based on the preemptive model, and requires much more time to compute.

Computational results. By simply reducing some windows until they are empty, iterated Carlier-Pinson provided lower bounds that are approximately the same as those found by the packing algorithm, and it obtained those bounds in less than a second for each problem tested. This time includes the time for a search to determine the correct value of T, so this algorithm is several orders of magnitude faster than the packing algorithm. Exact one-machine shaving gave better bounds for several of the smaller problems that we examined, but it gave virtually no improvement on the larger problems and is significantly slower. The running times include a bisection search between a lower bound provided by iterated Carlier-Pinson and an upper bound provided by Applegate & Cook's implementation of the shifting bottleneck procedure; the time to compute the initial bounds is not included in the running times. The exact one-machine shaving results were obtained using the one-machine scheduling code of Applegate & Cook, and no effort was made to optimize performance for our application.

Shaving

We shall show how to strengthen the ideas from One-Machine Shave to further reduce the size of the processing windows. For some operation k ∈ O and its processing window [u_k, v_k], we attempt to reduce the window by showing that
the operation could not have started at one end of the interval. If we assume that the operation starts at one end of the interval and show that this implies that there is no feasible schedule, then we can conclude that the operation cannot start there; thus we can shave a portion of the window by applying this idea. Carlier & Pinson have independently proposed this idea; a preliminary version of our results was announced earlier. We shall describe our algorithm only in terms of shaving the start of the windows; of course, these ideas apply to the end as well.

Let the window for operation k be [u_k, v_k], and fix w_k such that u_k < w_k ≤ v_k. Consider the problem in which [u_k, v_k] is replaced by [u_k, w_k] and all other windows are unchanged. If we can show that there is no feasible schedule for the modified data, then, if there is a feasible schedule for the original instance, k must start in the interval [w_k, v_k].

This approach relies on the fact that we are able to show quickly that no feasible schedule exists in which the given operation is constrained to start within a small window. The algorithms described above are ideally suited for this. In particular, in our procedure CP Shave, we use iterated Carlier-Pinson to determine the maximum value w_k such that [u_k, w_k] is infeasible. Once again, updates for one operation's window may lead to further updates for other operations; hence we repeatedly apply this until no further updates are possible.

The algorithm CP Shave can also be viewed as an algorithm to verify that there exists a schedule consistent with a particular window, and hence we can apply this idea recursively. However, if we apply more than one level of recursion then the algorithm becomes impractical, so we consider only Double Shave, which simply uses CP Shave as its infeasibility test.

Implementation issues. We wish to find the maximum amount to shave off a given window. Since frequently we cannot shave off anything, we initially attempt to shave just one unit, and repeatedly double this trial length until the shaving
procedure fails. We then perform a normal bisection search to find the optimal amount to shave.

We can specify exact starting times for some operations while ensuring that feasibility is maintained. For example, if k ∈ O_i is the first operation of a job and, for each other l ∈ O_i, u_l ≥ u_k + p_k, then we can schedule k to start at u_k. We preprocess the instance to find all such operations.

Each time we attempt to shave a portion of a window, we compute a bound for an instance that differs only slightly from the original one. If we have already computed a bound for the original instance, it may be possible to exploit this fact. An advantage of our new O(n²) implementation for computing iterated Carlier-Pinson updates is that, after updating one head, we can compute the implied updates for all of the tails in essentially linear time.

Computational results. For each algorithm we report a lower bound T and two running times: the first is the time to show that T − 1 is infeasible, whereas the second also includes the bisection search for T. For CP Shave we use iterated Carlier-Pinson as our initial lower bound and an upper bound provided by Applegate & Cook's implementation of the shifting bottleneck procedure. For Double Shave we use CP Shave as our initial lower bound and the best known upper bound, primarily due to Balas & Vazacopoulos.

CP Shave provides a significant improvement over iterated Carlier-Pinson. The computation time for CP Shave ranged from a few seconds for the fastest problem to considerably longer for the slowest; the more difficult problems are generally in the one to two minute range. Double Shave is an excellent bound: in all but three of the instances it is tight. The three problems for which Double Shave is not tight are currently unsolved; however, we report better bounds below using a branch-and-bound procedure. Double Shave is quite slow: running times range from minutes for some of the easier problems to several days for the hardest. In addition to finding tight bounds for some problems,
Double Shave was able to reduce the processing windows to such an extent that it is straightforward to construct a schedule.

It is worth noting the sensitivity of the various algorithms to scaling. The running time of the packing algorithm is proportional to T, whereas the running time of iterated Carlier-Pinson is independent of T. The running times of the various shave algorithms are all proportional to log T, since they involve bisection searches to find the amount to shave.

Time-Oriented Branching

Almost all branch-and-bound algorithms for the job-shop scheduling problem have been based on the disjunctive graph formulation. We give two branching rules that more directly exploit our processing-windows formulation.

Branching on the next available operation. We first consider a simple strategy: focus on the first available operation that has not yet been assigned a starting time; this operation either starts immediately or it is delayed. This gives us two branches. If an operation is delayed, we know that it must start at the completion time of some operation on the same machine, since we can restrict our attention to active schedules; in this case we can increase the start of the processing window to the earliest time at which an unscheduled operation on that machine can finish.

While the above branching scheme is technically correct, it is not as efficient as possible. Since we do not restrict our search to active schedules, we can find the same schedule in both branches. When we update the head of an operation, we do not require that it immediately follow another operation on the same machine; hence it is possible that we delay an operation k ∈ O_i but still do not schedule another operation on machine i before we schedule k. The schedule obtained might also fail to be active, in that there might be idle time immediately prior to some operation that could be used to process another operation which is currently being processed later. To solve the first problem, when we delay an
operation k ∈ O_i, we mark it delayed and then require that it be scheduled to start at the completion of some other operation l ∈ O_i; to ensure this, we further delay k until this condition holds. The second problem requires a bit more care, but we can nonetheless modify this approach to ensure that we generate only active schedules.

Branching on tight sets and nearly tight sets. A tight set is a set S ⊆ O_i such that r(S) + p(S) + q(S) = T. An ε-tight set is a set S ⊆ O_i such that r(S) + p(S) + q(S) ≥ T − ε. A tight set S ⊆ O_i is significant because we know that machine i must process S without interruption from r(S) until T − q(S). Thus we first attempt to compute the order in which the operations of S are processed: to do this, we start at r(S) and branch depending on which operation starts at time r(S). Unfortunately, for many instances tight sets are hard to find, and so we relax the concept and consider ε-tight sets. For any ε-tight set S, we know that some operation starts in the interval [r(S), r(S) + ε]. If ε is sufficiently small, each permutation of the operations of the ε-tight set will result in a set of disjoint processing windows. Thus a natural extension of the branch-and-bound algorithm for tight sets can be used for ε-tight sets. We branch depending on which operation in S is processed first. If operation k is first, we can reduce the size of its window to be at most ε and update the windows of the other operations to reflect the fact that they cannot start until k is completed. This creates an ε-tight set for the remainder of the operations. An interesting question is: are the branches mutually exclusive? The answer is yes, provided ε is not too large relative to the lengths of the shortest operations of the tight set.

Lemma. If S is an ε-tight set and k and l are the two shortest operations of O_i, then the branches are mutually exclusive if ε < p_k + p_l.

Computational results. The results of our branch-and-bound algorithms can be found in the table; when we did not obtain a result, we report NR. Our first implementation ran Double Shave to reduce the windows and then
branched on the next available operation to obtain a schedule. We used iterated Carlier-Pinson as the lower bounding technique for the branch-and-bound. This approach succeeded in solving a number of the problems. We separate the time to find the optimal schedule into two parts: the time to reduce the windows using Double Shave, and the time to find the schedule using branch-and-bound. In all cases where we found a schedule, the branch-and-bound portion took less than a second, while the window reduction phase usually took several thousand seconds. This approach was ineffective for instances where the critical path contained a long chain of operations on the same machine, since in this case Double Shave performed badly. Next, we omitted the window reduction phase and instead used the stronger lower bound CP Shave. Using this technique we were able to solve fewer instances, but the amount of time required to solve some of the instances was greatly reduced; this technique seems to be appropriate only for relatively easy instances. We also implemented tight set branching, using CP Shave as our lower bounding technique. We branched on the machine i with the highest one-machine bound and chose S to be the tightest subset of O_i, subject to a bound on |S|. We report the running time to find the optimal schedule given the optimal T, the time to prove that the schedule is optimal, and the time to find the optimal schedule by starting T at the CP Shave bound and increasing it by one until we find an optimal schedule. Using this technique we are able to solve most of the sixteen instances and obtain the best known lower bounds for three others. We were able to solve one further instance, LA, when we used iterated Carlier-Pinson as our lower bound. The one instance that this technique has so far been ineffective in solving is LA; this might be due to the large gap between the one-machine bound and the optimal schedule length, which makes it very difficult to find nearly tight sets. There seem to be at least
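Both ways of reducing the optimization problem to the feasibility problem, the bisection search described in the introduction and the increase-by-one search from the CP Shave bound used here, can be sketched as follows. This is a minimal sketch; the names are illustrative, and `feasible(T)` stands for any routine (such as the branch-and-bound with CP Shave) that decides whether a schedule of makespan at most T exists, assumed monotone in T.

```python
from typing import Callable

def makespan_by_bisection(feasible: Callable[[int], bool],
                          lower: int, upper: int) -> int:
    """Binary search for the smallest T in [lower, upper] with
    feasible(T) true; upper is assumed feasible (e.g. a known schedule)."""
    while lower < upper:
        mid = (lower + upper) // 2
        if feasible(mid):
            upper = mid          # a schedule of makespan mid exists
        else:
            lower = mid + 1      # no schedule completes by time mid
    return lower

def makespan_by_increment(feasible: Callable[[int], bool],
                          lower_bound: int) -> int:
    """Start T at a lower bound (e.g. the CP Shave bound) and raise it
    by one until the feasibility problem has a solution."""
    T = lower_bound
    while not feasible(T):
        T += 1
    return T
```

The incremental variant pays off when the lower bound is tight, since each infeasible query tends to be cheap; bisection is safer when the gap between the bound and the optimum may be large.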
two very different classes of difficult job shop scheduling instances. In the first class, the one-machine bound is tight or nearly tight, but it is still very difficult to find an optimal schedule. LA is an example of this class: here the one-machine bound is tight, but the problem was solved only recently, when a new local search technique was finally able to reduce the upper bound to the one-machine bound. In the second class, the one-machine bound is very poor, and it seems difficult to obtain good lower bounds. LA is an example of this sort: in this case the upper bound was established before the lower bound. It seems likely that techniques that work for one type of problem will not work well for the other. Tight set branching seems to work well on the first type of problem; LA is a problem in this class. It seems to be difficult for local search techniques (only the guided local search technique of Balas and Vazacopoulos is able to find an optimal schedule), but using tight set branching we are able to solve this problem in under two minutes. As we discussed above, this technique does not work as well on LA, an instance at the other extreme. Overall, our time-oriented branch-and-bound algorithms gave good results for the instances that we tested. We were able to solve to optimality four standard benchmark problems that were unsolved when we began our effort, including ABZ, which to our knowledge we were the first to solve. For the other three unsolved problems that we examined, we were able to improve upon the best known lower bounds. In all cases the performance of our algorithm is at least competitive with disjunctive graph based approaches, and in many cases it performs significantly better. It seems likely that further progress can be made by combining the purely combinatorial approaches with our time-oriented approach. Given our insight into the dual nature of hard instances, we believe that such a multi-pronged attack might be most
effective.

References

J. Adams, E. Balas, and D. Zawack. The shifting bottleneck procedure for job shop scheduling. Management Sci.
D. Applegate and W. Cook. A computational study of the job shop scheduling problem. ORSA J. Comput.
E. Balas. On the facial structure of scheduling polyhedra. Math. Programming Stud.
E. Balas and A. Vazacopoulos. Guided Local Search with Shifting Bottleneck for Job Shop Scheduling. Management Science Research Report MSRR, Graduate School of Industrial Administration, Carnegie Mellon University, Pittsburgh, PA.
P. Bratley, M. Florian, and P. Robillard. On sequencing with earliest starts and due dates with application to computing bounds for the n/m/G/Fmax problem. Naval Res. Logist. Quart.
P. Brucker, B. Jurisch, and B. Sievers. A branch and bound algorithm for the job shop scheduling problem. Discrete Appl. Math.
J. Carlier and E. Pinson. An algorithm for solving the job shop problem. Management Sci.
J. Carlier and E. Pinson. A practical use of Jackson's preemptive schedule for solving the job shop problem. Ann. Oper. Res.
J. Carlier and E. Pinson. Adjustments of heads and tails for the job shop problem. European J. Oper. Res.
H. Fisher and G.L. Thompson. Probabilistic learning combinations of local job shop scheduling rules. In J.F. Muth and G.L. Thompson, editors, Industrial Scheduling. Prentice Hall, Englewood Cliffs, NJ.
M.L. Fisher, B.J. Lageweg, J.K. Lenstra, and A.H.G. Rinnooy Kan. Surrogate duality relaxation for job shop scheduling. Discrete Appl. Math.
M.R. Garey, D.S. Johnson, and R. Sethi. The complexity of flowshop and jobshop scheduling. Math. Oper. Res.
R.L. Graham, E.L. Lawler, J.K. Lenstra, and A.H.G. Rinnooy Kan. Optimization and approximation in deterministic sequencing and scheduling. Ann. Discrete Math.
J.R. Jackson. An extension of Johnson's results on job lot scheduling. Naval Res. Logist. Quart.
S. Lawrence. Resource Constrained Project Scheduling: an Experimental Investigation of Heuristic Scheduling Techniques (Supplement). Graduate School of Industrial Administration, Carnegie Mellon University, Pittsburgh, PA.
E. Nowicki and C. Smutnicki. A fast taboo search algorithm for the job shop problem. Management Sci. To appear.
S.A. Plotkin, D.B. Shmoys, and E. Tardos. Fast approximation algorithms for fractional packing and covering problems. Math. Oper. Res.
D.B. Shmoys. Solving scheduling problems via linear programming. Talk at ORSA/TIMS, Boston, MA, May.
J.P. Sousa and L.A. Wolsey. A time indexed formulation of non-preemptive single machine scheduling problems. Math. Programming.
C. Stein. Approximation Algorithms for Multicommodity Flow and Shop Scheduling Problems. PhD thesis, MIT-LCS-TR, Laboratory for Computer Science, MIT, Cambridge, MA.
R.J.M. Vaessens, E.H.L. Aarts, and J.K. Lenstra. Job shop scheduling by local search. Math. Programming B. To appear.
J.M. Van den Akker. LP-based solution methods for single-machine scheduling problems. PhD thesis, Eindhoven University of Technology, Eindhoven, The Netherlands.