OS Presentation1

Slide 2/37

The OS is a manager which manages the system's resources.

When more than one user works simultaneously, these resources are shared between them. A single memory block (consisting of memory chips) that is accessed by different processes simultaneously must be managed. Secondary storage is not directly accessed by the CPU; the CPU accesses main memory differently than secondary storage. When the CPU needs to access secondary storage, it calls the device driver, which searches for the required data in secondary storage, so access to secondary storage is also managed. Secondary storage is a kind of I/O device. A modern OS manages all of this satisfactorily for a single isolated system (having all these resources) as well as for a network of systems; the open question is how to manage resources in a distributed environment.

Slide 7/37

A process is a program in execution. Any process has two types of bursts: CPU bursts and I/O bursts. During a CPU burst the program runs on the CPU, and during an I/O burst the program waits for the required I/O data. A process starts and terminates with a CPU burst. During the I/O burst of one job (which moves from the active to the waiting state) the CPU burst of another job is performed, so a job cannot move directly from the waiting state back to the active state; it moves to the ready state instead. Processes that require an I/O operation during processing wait in the waiting state; after the I/O completes they re-enter the ready state, because some other process may be working in the active state.
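
A minimal sketch of these state rules in C; the state names and the transition table are an illustration built from the note above, not something the slides give.

```c
#include <stdio.h>
#include <stdbool.h>

/* Illustrative process states, following the slide's terminology. */
typedef enum { NEW, READY, ACTIVE, WAITING, TERMINATED } pstate;

/* Returns true for the transitions the note allows:
 * NEW -> READY (admitted), READY -> ACTIVE (dispatched),
 * ACTIVE -> WAITING (I/O burst starts), ACTIVE -> READY (preempted),
 * ACTIVE -> TERMINATED (last CPU burst), WAITING -> READY (I/O done).
 * WAITING -> ACTIVE is deliberately absent. */
static bool valid_transition(pstate from, pstate to) {
    switch (from) {
    case NEW:     return to == READY;
    case READY:   return to == ACTIVE;
    case ACTIVE:  return to == WAITING || to == READY || to == TERMINATED;
    case WAITING: return to == READY;      /* never straight back to ACTIVE */
    default:      return false;
    }
}

int main(void) {
    printf("WAITING -> ACTIVE allowed? %d\n", valid_transition(WAITING, ACTIVE)); /* 0 */
    printf("WAITING -> READY  allowed? %d\n", valid_transition(WAITING, READY));  /* 1 */
    return 0;
}
```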

Slide 8/37

The job of the CPU scheduler is to select a job from the ready queue and move it to the active state; it decides which job is given the CPU for its next CPU burst.

Slide 9/37

SJF is a special case of priority scheduling, with the priority taken as the reciprocal of the next CPU burst time. In the preemptive version, on each new job's arrival in the ready queue the remaining time of the job currently in its CPU burst is checked; if the new job's burst is less, then, as with an interrupt, the new job is executed first. SRT (shortest remaining time) is SJF with a preemptive nature. At regular intervals of time, priority is increased by aging, because if some job has a very low priority and the jobs that keep arriving always have greater priority, that earlier job would never complete. Due to preemption, a process can move directly from the active state to the ready state.
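
A small C sketch of the two ideas above, shortest-remaining-time selection and aging. The PCB fields, names, and numbers are assumptions for illustration, not taken from the slides.

```c
#include <stdio.h>

/* Hypothetical PCB fields for the scheduling ideas above. */
struct proc {
    const char *name;
    int remaining;   /* remaining CPU-burst time */
    int priority;    /* larger value = higher priority */
};

/* SRT: pick the ready job with the least remaining burst time.
 * Called whenever a new job arrives, so a shorter newcomer preempts. */
static int pick_srt(struct proc p[], int n) {
    int best = 0;
    for (int i = 1; i < n; i++)
        if (p[i].remaining < p[best].remaining)
            best = i;
    return best;
}

/* Aging: at regular intervals raise the priority of every waiting job
 * so a low-priority job cannot starve forever. */
static void age(struct proc p[], int n, int boost) {
    for (int i = 0; i < n; i++)
        p[i].priority += boost;
}

int main(void) {
    struct proc ready[] = { {"A", 7, 1}, {"B", 3, 2}, {"C", 9, 5} };
    int k = pick_srt(ready, 3);
    printf("next on CPU: %s\n", ready[k].name);   /* B, shortest remaining */
    age(ready, 3, 1);                             /* periodic priority boost */
    return 0;
}
```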

Slide 10/37

The long-term scheduler does the job of deciding which jobs are brought into main memory (the ready queue); the jobs should not be all I/O-bound or all CPU-bound (a proper mixture is needed).

The short-term scheduler schedules which job's CPU burst from main memory is to be sent to the CPU.

Slide 13/37

In the cobegin/coend construct, as many processes as are written between cobegin and coend execute simultaneously. Here the parent process is not destroyed; it waits until its different child processes end. In the fork/join construct, only those processes survive which bring the join count to 0, i.e. the one that at the last reaches level 3 (in the slide's diagram).
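
A rough cobegin/coend analogue sketched with POSIX threads (the API is chosen here for illustration; the slides do not prescribe one): the parent spawns the children and waits for all of them instead of being destroyed.

```c
#include <pthread.h>
#include <stdio.h>

static void *child(void *arg) {
    printf("child %ld running\n", (long)arg);
    return NULL;
}

int main(void) {
    pthread_t t[3];
    for (long i = 0; i < 3; i++)      /* cobegin: start the listed processes */
        pthread_create(&t[i], NULL, child, (void *)i);
    for (int i = 0; i < 3; i++)       /* coend/join: parent waits for every child */
        pthread_join(t[i], NULL);
    printf("parent continues after coend\n");
    return 0;
}
```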

Slide 14/37

Example: a computer as producer and a printer as consumer. Buffers are of two types. If the production rate is greater than the consumption rate, then in an unbounded buffer the number of items keeps increasing with time, while in a bounded buffer the producer needs to wait until the consumer consumes. When the production rate is lower, the consumer waits for the producer instead.
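
A minimal bounded-buffer producer/consumer sketch in C with counting semaphores. The buffer size, item count, and names are assumptions for illustration, not values from the slides.

```c
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define N 4
static int buffer[N];
static int in = 0, out = 0;
static sem_t empty_slots, full_slots;   /* counting semaphores */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *producer(void *arg) {
    for (int item = 0; item < 10; item++) {
        sem_wait(&empty_slots);         /* wait while the bounded buffer is full */
        pthread_mutex_lock(&lock);
        buffer[in] = item;
        in = (in + 1) % N;
        pthread_mutex_unlock(&lock);
        sem_post(&full_slots);
    }
    return arg;
}

static void *consumer(void *arg) {
    for (int i = 0; i < 10; i++) {
        sem_wait(&full_slots);          /* wait while there is nothing to consume */
        pthread_mutex_lock(&lock);
        int item = buffer[out];
        out = (out + 1) % N;
        pthread_mutex_unlock(&lock);
        sem_post(&empty_slots);
        printf("consumed %d\n", item);
    }
    return arg;
}

int main(void) {
    sem_init(&empty_slots, 0, N);       /* N empty slots initially */
    sem_init(&full_slots, 0, 0);
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}
```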

Slide 16/37

The solution is modified because otherwise all n elements of the buffer cannot be filled (at most n-1 can be in use at once).
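
A sketch of the modified buffer, assuming the modification is an explicit count variable (the usual textbook fix): with only in/out pointers, "full" and "empty" both look like in == out, so at most n-1 items fit; the count lets all n slots be used.

```c
#include <stdbool.h>

#define N 4
static int buf[N];
static int in = 0, out = 0;
static int count = 0;               /* the modification: explicit item count */

static bool put(int item) {
    if (count == N) return false;   /* full: all n slots are now usable */
    buf[in] = item;
    in = (in + 1) % N;
    count++;
    return true;
}

static bool get(int *item) {
    if (count == 0) return false;   /* empty */
    *item = buf[out];
    out = (out + 1) % N;
    count--;
    return true;
}

int main(void) {
    int x;
    for (int i = 0; i < N; i++) put(i);   /* all n elements fit */
    while (get(&x)) { /* drain */ }
    return 0;
}
```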

Slide 18/37

Memory management: only the core part of the OS, the part that is frequently used, is loaded in main memory. If a process/user tries to access an address that does not belong to it, that access must be caught (protection).

Slide 20/37

Increasing the number of partitions increases complexity but utilizes the CPU more effectively; extra care is needed to keep each partition's base address and limit. The CPU always generates logical addresses starting with 0.
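
A sketch of the base/limit check implied here; the partition values are assumed for illustration.

```c
#include <stdio.h>
#include <stdlib.h>

/* The CPU generates logical addresses starting at 0; the hardware adds
 * the partition's base and checks against its limit. */
struct partition { unsigned base, limit; };

static unsigned translate(struct partition p, unsigned logical) {
    if (logical >= p.limit) {                 /* protection check */
        fprintf(stderr, "trap: address %u outside partition\n", logical);
        exit(1);
    }
    return p.base + logical;                  /* physical address */
}

int main(void) {
    struct partition job = { .base = 30000, .limit = 12000 };
    printf("logical 100 -> physical %u\n", translate(job, 100));  /* 30100 */
    return 0;
}
```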

Slide 21/37

Fragmentation: internal and external. The first-fit and best-fit algorithms are maintained through the partition allocation table.
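
A small sketch contrasting first fit and best fit over an assumed free-partition table (the hole sizes are illustrative).

```c
#include <stdio.h>

#define H 5
static int hole[H] = { 100, 500, 200, 300, 600 };   /* free-partition sizes */

static int first_fit(int request) {
    for (int i = 0; i < H; i++)
        if (hole[i] >= request) return i;   /* first hole big enough */
    return -1;
}

static int best_fit(int request) {
    int best = -1;
    for (int i = 0; i < H; i++)
        if (hole[i] >= request && (best < 0 || hole[i] < hole[best]))
            best = i;                       /* smallest hole that still fits */
    return best;
}

int main(void) {
    printf("first fit for 212: hole %d\n", first_fit(212));   /* index 1 (500) */
    printf("best  fit for 212: hole %d\n", best_fit(212));    /* index 3 (300) */
    return 0;
}
```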

Slide 22/37

The job is broken into pages whose size is the same as the frame size of memory.
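
A minimal paged-address-translation sketch; the page size, page-table contents, and addresses are assumptions for illustration.

```c
#include <stdio.h>

#define PAGE_SIZE 1024                       /* 1 KB pages, equal to the frame size */
static int page_table[4] = { 5, 2, 7, 0 };   /* page number -> frame number */

static unsigned translate(unsigned logical) {
    unsigned page   = logical / PAGE_SIZE;   /* which page of the job */
    unsigned offset = logical % PAGE_SIZE;   /* position inside the page */
    unsigned frame  = page_table[page];
    return frame * PAGE_SIZE + offset;
}

int main(void) {
    unsigned la = 2 * PAGE_SIZE + 37;                           /* page 2, offset 37 */
    printf("logical %u -> physical %u\n", la, translate(la));   /* lands in frame 7 */
    return 0;
}
```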

Slide 23/37

Here the problem is reduced because the job can be positioned in frames in a non-contiguous manner.

Slide 24/37

Here two different jobs are sharing some part of their job (three pages) with each other, so neither job should be allowed to change the common part. It may also happen that the pages allowed for a job are not fully filled, so a valid/invalid flag says whether a particular page is valid for storing in main memory or not.
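
A sketch of the two ideas on this slide, shared (read-only) pages and the valid/invalid flag; all table contents and field names are assumed.

```c
#include <stdio.h>
#include <stdbool.h>

struct pte { int frame; bool valid; bool writable; };

/* Two jobs whose first three page-table entries point at the same frames. */
static struct pte job1[5] = {
    { 4, true,  false }, { 9, true, false }, { 6, true, false },  /* shared pages */
    { 2, true,  true  }, { 0, false, false }                      /* last entry invalid */
};
static struct pte job2[5] = {
    { 4, true,  false }, { 9, true, false }, { 6, true, false },  /* same shared frames */
    { 7, true,  true  }, { 3, true,  true  }
};

static void touch_page(struct pte *pt, int page, bool write) {
    if (!pt[page].valid)            { printf("page %d: invalid (trap)\n", page); return; }
    if (write && !pt[page].writable){ printf("page %d: shared, write denied\n", page); return; }
    printf("page %d -> frame %d\n", page, pt[page].frame);
}

int main(void) {
    touch_page(job1, 0, true);    /* writing the common part is refused */
    touch_page(job2, 0, false);   /* reading the same shared frame is fine */
    touch_page(job1, 4, false);   /* entry marked invalid */
    return 0;
}
```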

Slide 25/37

The CPU generates a segment number (like 1, 2, 3, ...) and a logical address (offset), which is to be converted into a physical address. Sharing of a segment between different processes is also possible, but difficult.

Slide 26/37

Here the advantage of paging, that different frames (pages) can be put at different physical addresses, is possible for a particular segment. In procedural programming, where different functions are used, the functions can be treated as different segments.
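
A segment-table translation sketch in the same spirit; the segment bases, limits, and the functions named in the comments are assumed values.

```c
#include <stdio.h>
#include <stdlib.h>

/* The CPU supplies a segment number and an offset; each segment
 * (e.g. each function of a procedural program) has its own base. */
struct segment { unsigned base, limit; };

static struct segment seg_table[3] = {
    { 1400, 1000 },   /* segment 0: e.g. main */
    { 6300,  400 },   /* segment 1: e.g. a library function */
    { 4300, 1100 },   /* segment 2: e.g. data */
};

static unsigned translate(unsigned s, unsigned offset) {
    if (offset >= seg_table[s].limit) {
        fprintf(stderr, "trap: offset %u beyond segment %u\n", offset, s);
        exit(1);
    }
    return seg_table[s].base + offset;
}

int main(void) {
    printf("(2, 53)  -> %u\n", translate(2, 53));    /* 4353 */
    printf("(1, 399) -> %u\n", translate(1, 399));   /* 6699 */
    return 0;
}
```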

Slide 28/37

Here, if we are replacing a page through the page map table, the previous page which we are going to replace may have been modified (relative to its copy in secondary storage), so before overwriting this page we save the modified page/segment back to secondary storage.
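
A sketch of that write-back check, assuming a per-frame dirty flag; the field names and the stub I/O functions are illustrative.

```c
#include <stdio.h>
#include <stdbool.h>

struct frame { int page; bool dirty; };

static void write_back(struct frame *f) { printf("write page %d back to disk\n", f->page); }
static void load_page(struct frame *f, int page) {
    printf("load page %d into the frame\n", page);
    f->page = page;
    f->dirty = false;
}

static void replace(struct frame *victim, int new_page) {
    if (victim->dirty)          /* modified since it was brought in? */
        write_back(victim);     /* save it before overwriting the frame */
    load_page(victim, new_page);
}

int main(void) {
    struct frame f = { .page = 3, .dirty = true };
    replace(&f, 8);             /* writes page 3 back, then loads page 8 */
    return 0;
}
```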

Slide 30/37

Hit ratio and miss ratio are used for performance measurement. Here, because future arrivals of page references are unknown, this scheme is difficult to use in practice.
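
A small worked example of how the hit ratio feeds into effective access time, EAT = h * t_hit + (1 - h) * t_miss; the timing values are assumptions, not figures from the slides.

```c
#include <stdio.h>

int main(void) {
    double hit_ratio = 0.95;
    double t_hit     = 100.0;        /* ns: access cost on a hit            */
    double t_miss    = 8000000.0;    /* ns: assumed cost of servicing a miss */
    double eat = hit_ratio * t_hit + (1.0 - hit_ratio) * t_miss;
    printf("effective access time = %.0f ns\n", eat);
    return 0;
}
```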

Slide 32/37

N/2 is a compromise between the N frames required for a job and the faults incurred in fetching.

Slide 33/37

Cache memory. This is a three-level hierarchy. The page map table and segment map table are stored in cache memory so that fetching the table does not take time.

We assume the address is 16 bits and the data is 8 bits in length.

Slide 34/37

We assume the address is 16 bits and the data is 8 bits in length. The address is divided into two parts, so each block has a 3-bit address (word in block), meaning it can identify 8 different locations. There are 2^13 = 8192 blocks possible in main memory, of which 256 blocks are also held in the cache. Since each location is 8 bits long (the data-bus length), each block's size is 8 bytes (8 x 8 = 64 bits).

Once the CPU generates a logical address, it is converted to a physical address (using the table); the first 13 bits of this address are stored in a register of the cache, and this register is matched against each tag of the cache: if it matches, it is a hit, otherwise a miss. After a hit, we find the particular block that is present in the cache. In case of a miss, we fetch from main memory, save the block in the cache, and modify the cache's tag using one of the previous replacement techniques (FIFO, optimal, LRU). Here we assume that if a block's tag matches then that address is surely present, i.e. each block has all of its 8 locations in the cache; we identify the particular address within the block using the remaining three bits (word in block).

Here the 13-bit block-identification address is further divided into groups, as many as the cache memory space (256), so we form an 8-bit group identification.
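
A sketch of this associative lookup using the slide's numbers (16-bit address, 3-bit word-in-block, 13-bit tag, 256 cache blocks of 8 bytes); the stored contents and addresses are illustrative.

```c
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

#define CACHE_BLOCKS 256
#define BLOCK_WORDS    8

struct line { uint16_t tag; bool valid; uint8_t data[BLOCK_WORDS]; };
static struct line cache[CACHE_BLOCKS];

static bool lookup(uint16_t addr, uint8_t *out) {
    uint16_t tag  = addr >> 3;          /* upper 13 bits: block identification */
    uint16_t word = addr & 0x7;         /* lower 3 bits: word in block */
    for (int i = 0; i < CACHE_BLOCKS; i++)   /* compare with every tag in the cache */
        if (cache[i].valid && cache[i].tag == tag) {
            *out = cache[i].data[word];
            return true;                /* hit */
        }
    return false;                       /* miss: fetch the block from main memory */
}

int main(void) {
    cache[0] = (struct line){ .tag = 0x12AB >> 3, .valid = true,
                              .data = {1, 2, 3, 4, 5, 6, 7, 8} };
    uint8_t v;
    printf("0x12AB %s\n", lookup(0x12AB, &v) ? "hit" : "miss");
    printf("0x0007 %s\n", lookup(0x0007, &v) ? "hit" : "miss");
    return 0;
}
```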

Slide 35/37

Using a decoder, first a particular group out of the 256 groups is identified; then the tag value at that corresponding tag location is matched with the generated tag (from the physical address); if a match is found it is a hit, otherwise a miss. This is the fastest scheme because we need not match against all 256 (cache-size) blocks; the group is selected directly and a single comparison is done.
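
A matching sketch of the direct-mapped lookup, splitting the 13-bit block field into an 8-bit group index (256 lines, the decoder's job) and a 5-bit tag, so only one comparison is needed; contents are illustrative.

```c
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

#define LINES 256

struct line { uint8_t tag; bool valid; };
static struct line cache[LINES];

static bool lookup(uint16_t addr) {
    uint16_t block = addr >> 3;          /* 13-bit block identification            */
    uint8_t  index = block & 0xFF;       /* low 8 bits pick one of 256 lines       */
    uint8_t  tag   = block >> 8;         /* remaining 5 bits compared with the tag */
    return cache[index].valid && cache[index].tag == tag;   /* single comparison */
}

int main(void) {
    uint16_t addr = 0xA5C3;
    cache[(addr >> 3) & 0xFF] = (struct line){ .tag = (addr >> 3) >> 8, .valid = true };
    printf("0x%04X %s\n", addr, lookup(addr) ? "hit" : "miss");
    return 0;
}
```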

Slide 36/37

In set-associative mapping both properties are merged: the faster lookup of direct mapping as well as the ability to position a particular block at a number of places in cache memory.
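
A set-associative sketch; the 2-way organization is chosen here for illustration, since the slides do not fix the set size. The index picks a set directly, then the tag is compared with only the few lines inside that set.

```c
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

#define WAYS 2
#define SETS 128                           /* 256 lines / 2 ways */

struct line { uint8_t tag; bool valid; };
static struct line cache[SETS][WAYS];

static bool lookup(uint16_t addr) {
    uint16_t block = addr >> 3;            /* 13-bit block identification */
    uint16_t set   = block % SETS;         /* direct-style set selection  */
    uint8_t  tag   = (uint8_t)(block / SETS);
    for (int w = 0; w < WAYS; w++)         /* associative search within the set */
        if (cache[set][w].valid && cache[set][w].tag == tag)
            return true;
    return false;
}

int main(void) {
    uint16_t addr  = 0x1F40;
    uint16_t block = addr >> 3;
    cache[block % SETS][1] = (struct line){ .tag = (uint8_t)(block / SETS), .valid = true };
    printf("0x%04X %s\n", addr, lookup(addr) ? "hit" : "miss");
    return 0;
}
```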
