Friday, September 15, 2023

BDA_MODULE1_PPTs

 


Follow the link below:

https://docs.google.com/presentation/d/1VAiyi8CwqhcxJgNi10zqTm2ytv3Sz9YV/edit?usp=sharing&ouid=117785189492851473372&rtpof=true&sd=true


Big Data Analytics (18CS72) MODULE-1

 



Follow the link below:

https://docs.google.com/document/d/1QKGVs9ketbOBGwQKLLEyi4PNcbUee07O/edit?rtpof=true


Sunday, April 2, 2023

COMPUTER GRAPHICS AND VISUALIZATION (18CS62)

 

CLICK ON THE LINK BELOW TO GET FULL NOTES, ASSIGNMENT QUESTIONS, PPTs AND MODEL QUESTION PAPERS OF CG&V


https://drive.google.com/drive/folders/1Uz_D3a0ZkVGfsvbNN6Jrm1qkg4m7j00o

Saturday, October 23, 2021

Application Development Using Python (18CS55) MODULE1 QUESTION BANK

 

Application Development Using Python (18CS55)

Module 1

Question Bank:


1) Find the output of the following expressions:

1) (5 - 1) * ((7 + 1) / (3 - 1))

2) 'Alice' * 5

3) -9 % 5

4) 11 // 3

5) 2 ** 8

6) spam * 4

7) 15 / 2.0

8) 'Alice' + 42

 

2) Develop a Python program to find the average of best two marks out of three marks taken as input. Use Exceptional Handling if the input is not a number.
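A possible sketch for question 2. The helper names (best_two_average, average_from_strings) are illustrative, not prescribed by the question, and the interactive input() handling is folded into a function that takes the raw strings a user would type, so the exception handling can be exercised directly:

```python
def best_two_average(marks):
    # Average of the best two of three marks: drop the minimum.
    return (sum(marks) - min(marks)) / 2.0

def average_from_strings(raw):
    # raw: three strings, as they would come from input().
    try:
        marks = [float(s) for s in raw]
    except ValueError:
        return None  # non-numeric input handled gracefully
    return best_two_average(marks)

print(average_from_strings(["40", "80", "90"]))     # 85.0
print(average_from_strings(["40", "eighty", "90"])) # None
```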

3) What is the difference between range(10), range(0, 10), and range(0, 10, 1) in a for loop?

4) Explain the rules of precedence used by Python to evaluate an expression.

5) Write a short program that prints the numbers 1 to 10 using a for loop with each of the above ranges.

6) WAP to find the average of the best two tests from the three tests.

7) List the rules to declare variables. Explain 3 ways of declaring variables with an example.

8) Distinguish between break and continue statements with an example.

9) Demonstrate with an example print(), input() and string replication.

10) Explain elif, for, while, break and continue statements in Python with an example for each.

11) Write a Python program to check whether a given number is even or odd.

12) How can we pass parameters to user-defined functions? Explain with an example.

13) Explain local and global scope with local and global variables.

14) Demonstrate the concept of exceptions. Implement code which prompts the user for a Celsius temperature, converts the temperature to Fahrenheit, and prints out the converted temperature, handling exceptions.
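One way question 14 might be sketched. The helper convert is hypothetical; it takes the raw string a user would type at an input() prompt, so the exception path can be shown without an interactive session:

```python
def celsius_to_fahrenheit(c):
    # Standard conversion formula: F = C * 9/5 + 32.
    return c * 9.0 / 5.0 + 32.0

def convert(raw):
    # raw: the string a user would type at input("Enter Celsius temp: ").
    try:
        return celsius_to_fahrenheit(float(raw))
    except ValueError:
        return None  # non-numeric input

print(convert("100"))  # 212.0
print(convert("abc"))  # None
```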

15) Explain the types of errors with examples.

16) Write a Python program using try and except, so that your program handles non-numeric input gracefully by printing a message and exiting the program. A sample execution of the program is shown below:

Enter hours: 20

Enter Rate: nine

Error, please enter numeric input

Enter hours: forty

Error, please enter numeric input
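A possible sketch for question 16. The helper name gross_pay is illustrative, and the raw strings stand in for the values read by input():

```python
def gross_pay(hours_raw, rate_raw):
    # Returns hours * rate, or None when either value is non-numeric.
    try:
        hours = float(hours_raw)
        rate = float(rate_raw)
    except ValueError:
        print("Error, please enter numeric input")
        return None
    return hours * rate

print(gross_pay("20", "9"))     # 180.0
print(gross_pay("forty", "9"))  # prints the error message, returns None
```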

17) Explain conditional execution, alternative execution, chained conditionals and nested conditionals with examples.

18) Explain break and continue statements with examples in Python.

19) Write a program with a function computegrade that takes a score as its parameter and returns a grade as a string.

20) Write a Python program to find the greatest of three numbers (without using the and operator) by getting three numbers through the keyboard, using functions.

21) WAP that uses input to prompt a user for their name and then welcomes them.

22) Explain the str(), int() and float() functions with examples.

23) Explain elif with an example program.

24) WAP that asks for a username and password, using the continue statement.

25) Discuss the starting, stopping and stepping arguments to range() in Python.
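A quick demonstration of the point behind questions 3 and 25: the three one-argument, two-argument and three-argument forms produce the same sequence, and start/stop/step control where the sequence begins, where it stops (exclusive), and how it steps:

```python
# range(10), range(0, 10) and range(0, 10, 1) all yield the same numbers.
assert list(range(10)) == list(range(0, 10)) == list(range(0, 10, 1))

# start, stop (exclusive), step:
print(list(range(2, 12, 3)))   # [2, 5, 8, 11]
print(list(range(10, 0, -2)))  # [10, 8, 6, 4, 2] - a negative step counts down
print(list(range(1, 11)))      # the numbers 1 to 10
```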

26) Discuss Importing modules in detail.

27) Explain def, None, return values and return statements in Python.

28) Discuss local and global scope.

29) WAP to guess a number between 1 and 20 in Python.


APPLICATION DEVELOPMENT USING PYTHON (18CS55) MODULE1 NOTES

Click on the link below



https://docs.google.com/document/d/10vkWpG8eGaFIyu5rIau0LI9RfBRyF4Wn/edit?usp=sharing&ouid=117785189492851473372&rtpof=true&sd=true




Thursday, July 30, 2015

Synchronization


Thread Creation, Manipulation and Synchronization



  • We first must postulate a thread creation and manipulation interface. Will use the one in Nachos:
    class Thread {
      public:
        Thread(char* debugName); 
        ~Thread();
        void Fork(void (*func)(int), int arg);
        void Yield();
        void Finish();
    };
    
  • The Thread constructor creates a new thread. It allocates a data structure with space for the TCB.
  • To actually start the thread running, must tell it what function to start running when it runs. The Fork method gives it the function and a parameter to the function.
  • What does Fork do? It first allocates a stack for the thread. It then sets up the TCB so that when the thread starts running, it will invoke the function and pass it the correct parameter. It then puts the thread on a run queue someplace. Fork then returns, and the thread that called Fork continues.
  • How does OS set up TCB so that the thread starts running at the function? First, it sets the stack pointer in the TCB to the stack. Then, it sets the PC in the TCB to be the first instruction in the function. Then, it sets the register in the TCB holding the first parameter to the parameter. When the thread system restores the state from the TCB, the function will magically start to run.
  • The system maintains a queue of runnable threads. Whenever a processor becomes idle, the thread scheduler grabs a thread off of the run queue and runs the thread.
  • Conceptually, threads execute concurrently. This is the best way to reason about the behavior of threads. But in practice, the OS only has a finite number of processors, and it can't run all of the runnable threads at once. So, must multiplex the runnable threads on the finite number of processors.
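  • Python's threading module offers the same create-then-start pattern; a rough analogue of the Nachos interface above (start plays the role of Fork's "put the thread on the run queue", and join waits for the thread to Finish — the worker function and its argument are illustrative):

```python
import threading

results = []  # shared memory the child writes into

def worker(arg):
    # The body that Fork(worker, arg) would run in the new thread.
    results.append(arg)

# Thread("child") plus Fork(worker, 1), in one step:
t = threading.Thread(name="child", target=worker, args=(1,))
t.start()   # make the thread runnable
t.join()    # wait for it to finish
print(results)  # [1]
```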
  • Let's do a few thread examples. First example: two threads that increment a variable.
    int a = 0;
    void sum(int p) { 
      a++;
      printf("%d : a = %d\n", p, a);
    }
    void main() {
      Thread *t = new Thread("child");
      t->Fork(sum, 1);
      sum(0);
    }
    
  • The two calls to sum run concurrently. What are the possible results of the program? To understand this fully, we must break the sum subroutine up into its primitive components.
  • sum first reads the value of a into a register. It then increments the register, then stores the contents of the register back into a. It then reads the values of the control string, p, and a into the registers that it uses to pass arguments to the printf routine. It then calls printf, which prints out the data.
  • The best way to understand the instruction sequence is to look at the generated assembly language (cleaned up just a bit). You can have the compiler generate assembly code instead of object code by giving it the -S flag. It will put the generated assembly in the same file name as the .c or .cc file, but with a .s suffix.
            la      a, %r0
            ld      [%r0],%r1
            add     %r1,1,%r1
            st      %r1,[%r0]
    
            ld      [%r0], %o3 ! parameters are passed starting with %o0
            mov     %o0, %o1
            la      .L17, %o0
            call    printf
    
  • So when the two threads execute concurrently, the result depends on how the instructions interleave. What are the possible results?
    0 : 1                                      0 : 1
    1 : 2                                      1 : 1
    
    1 : 2                                      1 : 1
    0 : 1                                      0 : 1
    
    1 : 1                                      0 : 2
    0 : 2                                      1 : 2
     
    0 : 2                                      1 : 2
    1 : 1                                      0 : 2
    
    So the results are nondeterministic - you may get different results when you run the program more than once. So, it can be very difficult to reproduce bugs. Nondeterministic execution is one of the things that makes writing parallel programs much more difficult than writing serial programs.
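  • The interleaving argument can be checked mechanically. The following small simulation (not Nachos code — just an enumeration) runs every interleaving of the two threads' load/increment/store triples and records the final value of a, confirming that both 1 (a lost update) and 2 are reachable:

```python
from itertools import combinations

def run(interleaving):
    # interleaving: a sequence of thread ids; each thread takes
    # three steps, in order: load, increment, store.
    a = 0
    reg = {}             # each thread's private register
    step = {0: 0, 1: 0}  # how far each thread has progressed
    for tid in interleaving:
        s = step[tid]
        if s == 0:
            reg[tid] = a       # load a into the register
        elif s == 1:
            reg[tid] += 1      # increment the register
        else:
            a = reg[tid]       # store the register back into a
        step[tid] += 1
    return a

# All orderings of the six steps in which each thread's own
# three steps stay in program order: choose thread 0's slots.
finals = set()
for slots in combinations(range(6), 3):
    order = [1] * 6
    for i in slots:
        order[i] = 0
    finals.add(run(order))

print(finals)  # {1, 2}
```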
  • Chances are, the programmer is not happy with all of the possible results listed above. Probably wanted the value of a to be 2 after both threads finish. To achieve this, must make the increment operation atomic. That is, must prevent the interleaving of the instructions in a way that would interfere with the additions.
  • Concept of atomic operation. An atomic operation is one that executes without any interference from other operations - in other words, it executes as one unit. Typically build complex atomic operations up out of sequences of primitive operations. In our case the primitive operations are the individual machine instructions.
  • More formally, if several atomic operations execute, the final result is guaranteed to be the same as if the operations executed in some serial order.
  • In our case above, build an increment operation up out of loads, stores and add machine instructions. Want the increment operation to be atomic.
  • Use synchronization operations to make code sequences atomic. First synchronization abstraction: semaphores. A semaphore is, conceptually, a counter that supports two atomic operations, P and V. Here is the Semaphore interface from Nachos:
    class Semaphore {
      public:
        Semaphore(char* debugName, int initialValue);       
        ~Semaphore();                                      
        void P();
        void V();
    };
    
  • Here is what the operations do:
    • Semaphore(name, count) : creates a semaphore and initializes the counter to count.
    • P() : Atomically waits until the counter is greater than 0, then decrements the counter and returns.
    • V() : Atomically increments the counter.
  • Here is how we can use the semaphore to make the sum example work:
    int a = 0;
    Semaphore *s;
    void sum(int p) {
      int t;
      s->P();
      a++;
      t = a;
      s->V();
      printf("%d : a = %d\n", p, t);
    }
    void main() {
      Thread *t = new Thread("child");
      s = new Semaphore("s", 1);
      t->Fork(sum, 1);
      sum(0);
    }
    
  • We are using semaphores here to implement a mutual exclusion mechanism. The idea behind mutual exclusion is that only one thread at a time should be allowed to do something. In this case, only one thread should access a. Use mutual exclusion to make operations atomic. The code that performs the atomic operation is called a critical section.
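  • Python's threading.Semaphore provides the same pair of atomic operations under the names acquire (P) and release (V); the guarded sum above translates roughly as follows (initial count 1 makes it a mutual-exclusion lock):

```python
import threading

a = 0
s = threading.Semaphore(1)  # counter starts at 1: one thread may enter

def sum_worker(p):
    global a
    s.acquire()   # P(): wait until the counter is > 0, then decrement
    a += 1        # the critical section
    t = a
    s.release()   # V(): increment the counter
    print("%d : a = %d" % (p, t))

child = threading.Thread(name="child", target=sum_worker, args=(1,))
child.start()
sum_worker(0)
child.join()
print("final a =", a)  # always 2
```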
  • Semaphores do much more than mutual exclusion. They can also be used to synchronize producer/consumer programs. The idea is that the producer is generating data and the consumer is consuming data. So a Unix pipe has a producer and a consumer. You can also think of a person typing at a keyboard as a producer and the shell program reading the characters as a consumer.
  • Here is the synchronization problem: make sure that the consumer does not get ahead of the producer. But, we would like the producer to be able to produce without waiting for the consumer to consume. Can use semaphores to do this. Here is how it works:
    Semaphore *s;
    void consumer(int dummy) {
      while (1) { 
        s->P();
        consume the next unit of data
      }
    }
    void producer(int dummy) {
      while (1) {
        produce the next unit of data
        s->V();
      }
    }
    void main() {
      s = new Semaphore("s", 0);
      Thread *t = new Thread("consumer");
      t->Fork(consumer, 1);
      t = new Thread("producer");
      t->Fork(producer, 1);
    }
    
    In some sense the semaphore is an abstraction of the collection of data.
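  • A runnable Python sketch of the same producer/consumer pattern, with the "unit of data" modeled as an integer appended to a list (the bound of five items is arbitrary, just so the program terminates; a real producer/consumer would loop forever as above):

```python
import threading

items = []     # the data the semaphore counts
consumed = []
s = threading.Semaphore(0)  # counter starts at 0: nothing produced yet

def producer():
    for i in range(5):
        items.append(i)  # produce the next unit of data
        s.release()      # V(): signal that one more unit is available

def consumer():
    for _ in range(5):
        s.acquire()      # P(): wait until at least one unit exists
        consumed.append(items[len(consumed)])  # consume the next unit

c = threading.Thread(name="consumer", target=consumer)
p = threading.Thread(name="producer", target=producer)
c.start()
p.start()
p.join()
c.join()
print(consumed)  # [0, 1, 2, 3, 4]
```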

Processes and Threads




  • A process is an execution stream in the context of a particular process state.
    • An execution stream is a sequence of instructions.
    • Process state determines the effect of the instructions. It usually includes (but is not restricted to):
      • Registers
      • Stack
      • Memory (global variables and dynamically allocated memory)
      • Open file tables
      • Signal management information
      Key concept: processes are separated: no process can directly affect the state of another process.
  • Process is a key OS abstraction that users see - the environment you interact with when you use a computer is built up out of processes.
    • The shell you type stuff into is a process.
    • When you execute a program you have just compiled, the OS generates a process to run the program.
    • Your WWW browser is a process.
  • Organizing system activities around processes has proved to be a useful way of separating out different activities into coherent units.
  • Two concepts: uniprogramming and multiprogramming.
    • Uniprogramming: only one process at a time. Typical example: DOS. Problem: users often wish to perform more than one activity at a time (load a remote file while editing a program, for example), and uniprogramming does not allow this. So DOS and other uniprogrammed systems put in things like memory-resident programs that are invoked asynchronously, but still have separation problems. One key problem with DOS is that there is no memory protection - one program may write the memory of another program, causing weird bugs.
    • Multiprogramming: multiple processes at a time. Typical of Unix plus all currently envisioned new operating systems. Allows system to separate out activities cleanly.
  • Multiprogramming introduces the resource sharing problem - which processes get to use the physical resources of the machine when? One crucial resource: CPU. Standard solution is to use preemptive multitasking - OS runs one process for a while, then takes the CPU away from that process and lets another process run. Must save and restore process state. Key issue: fairness. Must ensure that all processes get their fair share of the CPU.
  • How does the OS implement the process abstraction? Uses a context switch to switch from running one process to running another process.
  • How does machine implement context switch? A processor has a limited amount of physical resources. For example, it has only one register set. But every process on the machine has its own set of registers. Solution: save and restore hardware state on a context switch. Save the state in Process Control Block (PCB). What is in PCB? Depends on the hardware.
    • Registers - almost all machines save registers in PCB.
    • Processor Status Word.
    • What about memory? Most machines allow memory from multiple processes to coexist in the physical memory of the machine. Some may require Memory Management Unit (MMU) changes on a context switch. But, some early personal computers switched all of process's memory out to disk (!!!).
  • Operating Systems are fundamentally event-driven systems - they wait for an event to happen, respond appropriately to the event, then wait for the next event. Examples:
    • User hits a key. The keystroke is echoed on the screen.
    • A user program issues a system call to read a file. The operating system figures out which disk blocks to bring in, and generates a request to the disk controller to read the disk blocks into memory.
    • The disk controller finishes reading in the disk block and generates an interrupt. The OS moves the read data into the user program and restarts the user program.
    • A Mosaic or Netscape user asks for a URL to be retrieved. This eventually generates requests to the OS to send request packets out over the network to a remote WWW server. The OS sends the packets.
    • The response packets come back from the WWW server, interrupting the processor. The OS figures out which process should get the packets, then routes the packets to that process.
    • Time-slice timer goes off. The OS must save the state of the current process, choose another process to run, then give the CPU to that process.
  • When build an event-driven system with several distinct serial activities, threads are a key structuring mechanism of the OS.
  • A thread is again an execution stream in the context of a thread state. The key difference between processes and threads is that multiple threads share parts of their state. Typically, multiple threads are allowed to read and write the same memory. (Recall that no process can directly access the memory of another process.) But each thread still has its own registers. It also has its own stack, though other threads can read and write the stack memory.
  • What is in a thread control block? Typically just registers. Don't need to do anything to the MMU when switch threads, because all threads can access same memory.
  • Typically, an OS will have a separate thread for each distinct activity. In particular, the OS will have a separate thread for each process, and that thread will perform OS activities on behalf of the process. In this case we say that each user process is backed by a kernel thread.
    • When process issues a system call to read a file, the process's thread will take over, figure out which disk accesses to generate, and issue the low level instructions required to start the transfer. It then suspends until the disk finishes reading in the data.
    • When process starts up a remote TCP connection, its thread handles the low-level details of sending out network packets.
  • Having a separate thread for each activity allows the programmer to program the actions associated with that activity as a single serial stream of actions and events. Programmer does not have to deal with the complexity of interleaving multiple activities on the same thread.
  • Why allow threads to access same memory? Because inside OS, threads must coordinate their activities very closely.
    • If two processes issue read file system calls at close to the same time, must make sure that the OS serializes the disk requests appropriately.
    • When one process allocates memory, its thread must find some free memory and give it to the process. Must ensure that multiple threads allocate disjoint pieces of memory.
    Having threads share the same address space makes it much easier to coordinate activities - can build data structures that represent system state and have threads read and write data structures to figure out what to do when they need to process a request.
  • One complication that threads must deal with: asynchrony. Asynchronous events happen arbitrarily as the thread is executing, and may interfere with the thread's activities unless the programmer does something to limit the asynchrony. Examples:
    • An interrupt occurs, transferring control away from one thread to an interrupt handler.
    • A time-slice switch occurs, transferring control from one thread to another.
    • Two threads running on different processors read and write the same memory.
  • Asynchronous events, if not properly controlled, can lead to incorrect behavior. Examples:
    • Two threads need to issue disk requests. First thread starts to program disk controller (assume it is memory-mapped, and must issue multiple writes to specify a disk operation). In the meantime, the second thread runs on a different processor and also issues the memory-mapped writes to program the disk controller. The disk controller gets horribly confused and reads the wrong disk block.
    • Two threads need to write to the display. The first thread starts to build its request, but before it finishes a time-slice switch occurs and the second thread starts its request. The combination of the two threads issues a forbidden request sequence, and smoke starts pouring out of the display.
    • For accounting reasons the operating system keeps track of how much time is spent in each user program. It also keeps a running sum of the total amount of time spent in all user programs. Two threads increment their local counters for their processes, then concurrently increment the global counter. Their increments interfere, and the recorded total time spent in all user processes is less than the sum of the local times.
  • So, programmers need to coordinate the activities of the multiple threads so that these bad things don't happen. Key mechanism: synchronization operations. These operations allow threads to control the timing of their events relative to events in other threads. Appropriate use allows programmers to avoid problems like the ones outlined above.