Institute of Technology & Management UniverseDhanora Tank Road, Paldi Village, Halol Highway, Near Jarod, Vadodara Dist, Gujarat, INDIA-391510 (Approved by AICTE, New Delhi, and Affiliated to Gujarat Technological University, Ahmadabad) FACULTY LABORATORY MANUAL For V Semester CSE Session: 2017-2018 Analysis and Design of Algorithm Subject Code: 2150703 Prepared by: Anuj Kumar Jain, Assistant Professor Department of Computer Science & Engineering June -2017 Subject Teacher: Anuj Kumar Jain, Assistant Professor, Dept. CSE, ITM Universe Vadodara Institute of Technology & Management Universe Dhanora Tank Road, Paldi Village, Halol Highway, Near Jarod, Vadodara Dist, Gujarat, INDIA-391510 (Approved by AICTE, New Delhi, and Affiliated to Gujarat Technological University, Ahmadabad) STUDENT CERTIFICATE This is to certify Mr.\ Miss ................................................................ student of Computer Engineering GTU Enrollment No. ............................................. has satisfactorily completed his/her term-work ................................... in Analysis and Design of Algorithm for term ending in.........................2017. Date: Place: Signature of Faculty Signature of Faculty (..............................................) (..............................................) Date Date Subject Teacher: Anuj Kumar Jain, Assistant Professor, Dept. CSE, ITM Universe Vadodara 1. Vision & Mission of ITMU Vision To develop the institute into a centre of excellence in education, research, training and consultancy to the extent that it becomes a significant player in the technical and overall development of the country. Mission To meet the global need of competent and dedicated professionals. To undertake research & development, consultancy & extension activities which are relevant to the needs of mankind. To serve the community by interaction on technical, scientific and other aspects of development. Values Humanity and ethics blended with sincerity, integrity and accountability. Productive delivery supported by healthy competition. Efficiency and dynamism coupled with sensitivity. To nurture innovation and ability to think differently with rational creativity. Appreciation of sustainable socio- cultural values and to feel proud to be a good professional contributing to the betterment of the Mankind and Mother Earth. Subject Teacher: Anuj Kumar Jain, Assistant Professor, Dept. CSE, ITM Universe Vadodara rounded and motivated Engineers in the field of Computer Science & Engineering. ITM Universe Vadodara . Vision & Mission of CSE Department Vision To develop the Computer Science & Engineering Department into an excellent center of learning. Dept. CSE. Assistant Professor. research. consultancy and training. Mission PO1: To produce professionally brilliant. 2. consultancy and technical interaction with industry Subject Teacher: Anuj Kumar Jain. PO2: To undertake developmental research. Dept. ITM Universe Vadodara . 2) The CSE program will prepare the graduates for adapting themselves with constantly changing technology with the ability of continuous learning. and solve engineering problems. PEO (Programme Educational Objectives) 1) The CSE program will prepare the graduates for Excellent professional and technical careers. Assistant Professor.3. analyze. formulate. Subject Teacher: Anuj Kumar Jain. ethical and social responsibilities. CSE. 3) The CSE program will prepare the graduates to effectively communicate and to understand professional. 5) The CSE program will prepare graduates to gain the ability to identify. 4) The CSE program will prepare graduates to work as a part of team on multidisciplinary projects. manage and secure the complex computer networks. k) Ability to appreciate contemporary issues in Computer Science & Engineering. troubleshoot. e) Ability to deploy. written and various presentation skills using IT tools i) Ability to respond effectively to the global. h) Ability to communicate effectively by oral. science and engineering in Computer Sc. societal and environmental concerns in developing computer engineering solutions. analyze and interpret data. Assistant Professor. c) Graduates will have strong fundamental concepts on core Computer Science & Engineering subjects. f) Ability to work cooperatively. CSE. b) Ability to design and conduct experiments.4. d) Ability to design develops. understand and apply information in the process of lifelong learning. j) Ability to acquire. maintain. PO (Programme Outcome) a) Ability to apply knowledge of mathematics. test and debug the software. analyzes. g) Ability to respond positively to the accepted norms of professional and ethical responsibility. ITM Universe Vadodara . creatively and responsibly in a multi-disciplinary team. & Engineering. Dept. Subject Teacher: Anuj Kumar Jain. SEMESTER: V COMPUTER SCIENCE & ENGINEERING Subject Name: Analysis and Design of Algorithms Subject Code: 2150703 Evaluation Scheme List of Experiments: 1. Implementation of a knapsack problem using greedy algorithm 9. Implementation of Graph and Searching (DFS and BFS). 10. the name of the faculty. 7. Implementation of max-heap sort algorithm 4.E. Implementation and Time analysis of linear and binary search algorithm. Subject Teacher: Anuj Kumar Jain. Implement prim’s algorithm 11. Implement kruskal’s algorithm. Merge sort and Quicksort 2. 3. Implementation of making a change problem using dynamic programming 8. which include videos. CSE. and benefits ACTIVE LEARNING ASSIGNMENTS: Preparation of power-point slides. The best three works should submit to GTU. Implementation and Time analysis of sorting algorithms. From the given string find maximum size possible palindrome sequence 2. Bubble sort. Selection sort. ITM Universe Vadodara . Implementation of a knapsack problem using dynamic programming. Lab Syllabus GUJARAT TECHNOLOGICAL UNIVERSITY B. Implementation of chain matrix multiplication using dynamic programming. Insertion sort. Dept. Design based Problems (DP)/Open Ended Problem: 1. BRTS route design. graphics for better understanding theory and practical work – The faculty will allocate chapters/ parts of chapters to groups of students so that the entire syllabus to be covered.5. animations. Explore the application of Knapsack in human resource selection and courier loading system using dynamic programming and greedy algorithm 3. along with the names of the students of the group. Implement LCS problem. Implementation and Time analysis of factorial program using iterative and recursive method 5. pictures. traffic on road. Department and College on the first slide. 12. Assistant Professor. 6. The power-point slides should be put up on the web-site of the College/ Institute. considering traffic. Write a C program to Implementation of 9-b BFS Write a C program to Implement prim’s 10 algorithm Write a C program to Implement 11 kruskal’s algorithm. Write a C program to Implementation of 6 chain matrix multiplication using dynamic programming. 1 Bubble sort. Write a C program to Implementation and Time analysis of sorting algorithms. ITM Universe Vadodara . Write a C program to Implementation of 3 max-heap sort algorithm Write a C program to Implementation and 4 Time analysis of factorial program using iterative and recursive method Write a C program to Implementation of a 5 knapsack problem using dynamic programming. Assistant Professor. Selection sort. Merge sort and Quicksort Write a C program to Implementation and 2 Time analysis of linear and binary search algorithm. Write a C program to Implement LCS 12 problem. No. List of Experiment S. Insertion sort. Subject Teacher: Anuj Kumar Jain. Dept. CSE.6. Date of Title Page Grade Signature Experiment No. Write a C program to Implementation of 7 making a change problem using dynamic programming Write a C program to Implementation of a 8 knapsack problem using greedy algorithm Write a C program to Implementation of 9-a DFS . Bubble sort. 12 to Implement LCS problem. Merge sort and Quicksort 2 Implementation and Time analysis of linear and binary search algorithm. No List of Experiment Batch A-1 Batch A-2 Batch A-3 Performe Back Log Performed Back Log Performed Back Log d Date Date Date Date Date Date 1 Implementation and Time analysis of sorting algorithms. ITM Universe Vadodara .A S. 10 Implementation of BFS 11 Implement prim’s algorithm 11-1 Implement kruskal’s algorithm. 7 Implementation of making a change problem using dynamic programming 8 Implementation of a knapsack problem using greedy algorithm 9 Implementation of DFS . Dept. CSE. 6 Implementation of chain matrix multiplication using dynamic programming.7. Selection sort. Assistant Professor. Lab Plan Batch . Insertion sort. Subject Teacher: Anuj Kumar Jain. 3 Implementation of max-heap sort algorithm 4 Implementation and Time analysis of factorial program using iterative and recursive method 5 Implementation of a knapsack problem using dynamic programming. 7 Implementation of making a change problem using dynamic programming 8 Implementation of a knapsack problem using greedy algorithm 9 Implementation of DFS . 6 Implementation of chain matrix multiplication using dynamic programming. No List of Experiment Batch B-1 Batch B-2 Batch B-3 Performe Back Log Performed Back Log Performed Back Log d Date Date Date Date Date Date 1 Implementation and Time analysis of sorting algorithms. Insertion sort. CSE. Dept.Batch-B S. 12 to Implement LCS problem. Merge sort and Quicksort 2 Implementation and Time analysis of linear and binary search algorithm. Bubble sort. 3 Implementation of max-heap sort algorithm 4 Implementation and Time analysis of factorial program using iterative and recursive method 5 Implementation of a knapsack problem using dynamic programming. Selection sort. 10 Implementation of BFS 11 Implement prim’s algorithm 11-1 Implement kruskal’s algorithm. Subject Teacher: Anuj Kumar Jain. Assistant Professor. ITM Universe Vadodara . No.Activity 26 150950107028 DOSHI RISHI DEEPAKKUMAR selection problem Minimum Spanning trees 27 150950107029 GANDHI HELLY DEVESH (Kruskal’s algorithm. ALA Topic Enroll. 28 150950107030 GANDHI PARTH VIJAYKUMAR Depth First Search 29 150950107031 GONSALVES AARON DOMINIC Breath First Search GOSWAMI PRIYANK Topological sort 30 150950107032 BRIJESHGIRI 31 150950107034 PATEL HARSH AJAYKUMAR Connected components 32 150950107035 BHUVA HARSH PRAVINBHAI The Eight queens problem . Dept. 4 150950107004 AGRAWAL DRASHTI KAMLESH Loop invariant 5 150950107005 ALFERD ALVES CHRISTO Bubble sort. Name of the Student No. 1 150950107001 VAIDYA ADITYA HIRENKUMAR Basics of Algorithms 2 150950107002 RANADE ADVAIT G Asymptotic Notations 3 150950107003 CHARMI AGRAWAL Amortized analysis Analyzing control statement. CSE. . 24 150950107026 DIALANI DIMPLE JAIRAM Knapsack problem 25 150950107027 DIWAKAR SHUBHAM MANOHAR All Points Shortest path.8 ALA Topics Batch -A Sr. Radix sort 9 150950107010 BARBHAYA BHAVYA MANISH Counting sort 10 150950107011 BHAGAT RAHI KUMAR Recurrence & 11 150950107012 BHANDHARI NAMAS SANJIV Substitution method Recurrence and Iteration 12 150950107014 BHATT VALAY AVINASH methods to solve recurrence 13 150950107015 BRAHMBHATT YASH RAJENDRA Master Method with Proof Recurrence and Recursive tree 15 150950107017 CHOKSI JANKI NIKUNJ methods to solve recurrence divide and conquer algorithm - 16 150950107018 CHOYAL KETAN RAJPAL Binary Search 18 150950107020 DAVE RICHA RAJENDRA Merge Sort 19 150950107021 DESAI ANERI DIVYANG Quick Sort DOLIA DEVANSHI Matrix Multiplication. 20 150950107022 Exponential DHARMENDRASINH Dynamic Programming – 21 150950107023 DHAMDHERE SHALAKA SUDHIR Calculating the Binomial Coefficient 22 150950107024 DHOPTE ABHIRAJ PRATAP Making Change Problem. Assistant Professor. Selection sort 6 150950107007 PRAJAPATI AYUSHI YOGESH Insertion sort. 23 150950107025 SHAH DHRUMIL SHITALBHAI Assembly Line-Scheduling. 33 150950107036 MODI HIMANI BHARATKUMAR Minimax principle 34 150950107037 JADAV PRIYAL PRAVINBHAI Knapsack problem Subject Teacher: Anuj Kumar Jain. Prim’s algorithm). Greedy Algorithm. Shell sort BORAH BAHNIMAN HOREN Heap sort 7 150950107008 CHANDRA 8 150950107009 PATEL BANSARI RAJESHBHAI Bucket sort. Travelling Salesman problem. ITM Universe Vadodara . Polynomial 44 150950107047 PATEL KUNJ RAKESH reduction.ANGHAN Minimax principle 61 160953107002 KEVIN B . FAKHRIBHAI The Knuth-Morris-Pratt 41 150950107044 KHATRI KAMAL RAMKRISHNA algorithm. NP. ITM Universe Vadodara .ANGHAN Knapsack problem 62 160953107006 KUSH M . String Matching with finite 42 150950107045 KOTWAL KHUSHBU DIPAKBHAI automata 43 150950107046 KUKREJA JUGAL GAJENDRAPAL The Rabin-Karp algorithm The class P and NP.Completeness Problem. NP-Hard Problems SHARMA KUSHAGR Approximation algorithm 45 150950107048 AVDESHKUMAR 46 150950107049 LUHAR MEET JAGDIP Hamiltonian problem. Travelling Salesman problem. CSE. Prim’s algorithm).JOSHI Huffman code 66 160953107015 VIDHI D . 60 160953107001 KAVINB. Shell sort Subject Teacher: Anuj Kumar Jain.Activity 53 150950107056 selection problem AMRISHBHAI Minimum Spanning trees 54 150950107057 NAIK HARSHAL ROHITBHAI (Kruskal’s algorithm. DAVE Fractional Knapsack Problem. . Dept. 55 150950107058 SINHA NAMAN SANJEEB Depth First Search 56 150950107059 PATEL NEEL AKASH Breath First Search 57 150950107060 NINAWE RUCHI PRAKASH Topological sort 58 150950107061 TIWARI PALAK KISHOR Connected components 59 150954107001 RAHUL JALINDAR SONEWANE The Eight queens problem . . . JAMBEKAR VISHAKAHA Shortest paths 35 150950107038 UNMESH 36 150950107039 PATEL JHANVI JIGNESHKUMAR Fractional Knapsack Problem. MOKANI PRASHANT Greedy Algorithm. 64 160953107008 AAKURTI DESAI Job Scheduling Problem 65 160953107009 SHRIPAD S.PATEL Insertion sort. 51 150950107054 SHAH MIRAL MEHUL Knapsack problem 52 150950107055 MODI HARSH NAYANKUMAR All Points Shortest path. Assistant Professor. 47 150950107050 PATEL MAITRI NITINBHAI Travelling Salesman problem Dynamic Programming – MEHTA HARIOM 48 150950107051 Calculating the Binomial HIMANSHUBHAI Coefficient 49 150950107052 MEHTA ZANKHIT MUKESH Making Change Problem.DAVE Shortest paths 63 160953107007 RUDRA K. 37 150950107040 JIVANI SIMRAN YUSUFBHAI Job Scheduling Problem 38 150950107041 JOSHI HETA ALKESH Huffman code KAYADAWALA MOIZ The naive string matching 40 150950107043 algorithm. 50 150950107053 PUROHIT MEHUL JITENDRA Assembly Line-Scheduling. 4 150950107067 PARIKH NISARG MRUNAL Loop invariant 5 150950107068 DESAI PARTH RAJESHBHAI Bubble sort. 1 150950107064 PARAM TRUSHA PRASHANT Basics of Algorithms PAREKH VIDHISHA Asymptotic Notations 2 150950107065 UMESHKUMAR 3 150950107066 PARIKH ANUJ HEMANTKUMAR Amortized analysis Analyzing control statement. No. 25 150950107089 JAYDEEPSINH Greedy Algorithm. Selection sort 6 150950107070 PATEL DHARMIK MITESH Insertion sort. Name of the Student No. 20 150950107084 PUROHIT PRADIP SHIVLAL Exponential Dynamic Programming – 21 Calculating the Binomial 150950107085 PATEL RAJ KIRANKUMAR Coefficient 22 150950107086 RASHMI ANILKUMAR Making Change Problem. ALA Topic Enroll. 24 150950107088 PANDYA RUDRA NIKHIL Knapsack problem SANGLOD PARTHIVSINH All Points Shortest path. Travelling Salesman problem. 33 150950107098 SHAH NUPUR DHARMESHBHAI Minimax principle 34 150950107099 SHAH PARTH DIVYESH Knapsack problem Subject Teacher: Anuj Kumar Jain.Batch -B Sr. Dept.Activity 26 150950107091 SHAH AKSHAT VIPUL selection problem Minimum Spanning trees 27 (Kruskal’s algorithm. 23 150950107087 RAVAL VANDAN Assembly Line-Scheduling. Shell sort 7 150950107071 PATEL DHRUV DILIPBHAI Heap sort 8 150950107072 PATEL DHRUV VASANT Bucket sort. Assistant Professor. ITM Universe Vadodara . Prim’s 150950107092 SHAH DHRUV KETANKUMAR algorithm). 28 150950107093 SHAH JENISHA JAYDUTT Depth First Search 29 150950107094 SHAH KALP Breath First Search SHAH KHUSHBOO Topological sort 30 150950107095 CHANDRAKANT 31 150950107096 SHAH NEEL SANJAY Connected components 32 150950107097 SHAH NISARG BHAGYESH The Eight queens problem . CSE. Radix sort 9 150950107073 PATEL HARDI SURESHBHAI Counting sort 10 150950107074 PATEL KARAN MANIOJBHAI Recurrence & 11 150950107076 PATEL NIKET CHIRAG Substitution method Recurrence and Iteration 12 150950107077 PATEL NIRMAL UPENDRABHAI methods to solve recurrence PATEL RITUKUMARI Master Method with Proof 13 150950107078 VIRENDRABHAI Recurrence and Recursive tree 15 150950107079 PATEL SNEH KAMLESHKUMAR methods to solve recurrence PATEL VAIBHAV divide and conquer algorithm - 16 Binary Search 150950107081 MAHENDRABHAI 18 150950107082 PATIL BHUMIKA ARUNRAO Merge Sort 19 150950107083 SATA PRANSHI Quick Sort Matrix Multiplication. . Completeness 150950107109 SHIVNANI PARSHAV VINOD Problem. . 64 160953107013 PARMAR AMITA ARVINDBHAI Job Scheduling Problem 65 160953107014 PARMAR PREMLAL SANTLAL Huffman code Subject Teacher: Anuj Kumar Jain. Prim’s 150950107119 THAKKAR RESHMA MANISH algorithm). NP. String Matching with finite 42 150950107107 SHARMA HARSHDA ANIL automata 43 150950107108 SHARMA SHIVANI NILESHBHAI The Rabin-Karp algorithm The class P and NP. Assistant Professor. . Greedy Algorithm. 50 150950107115 PARESHBHAI 51 150950107116 SENJALIYA TANVI AMBABHAI Knapsack problem 52 150950107117 TAPKIR POOJA All Points Shortest path. 46 150950107111 JASHVANTKUMAR 47 150950107112 SHUKLA HELLY RAJESHKUMAR Travelling Salesman problem Dynamic Programming – 48 Calculating the Binomial 150950107113 SONI RIYA HITESHKUMAR Coefficient 49 150950107114 JADAV STEVE SURESHKKUMAR Making Change Problem. . The Knuth-Morris-Pratt 41 150950107106 SHAHERAWALA HARSH algorithm. ITM Universe Vadodara . Dept. CSE. Polynomial 44 reduction. 37 150950107103 SHAH SAMYAK YOGESH Job Scheduling Problem 38 150950107104 SHAH SHLOK Huffman code The naive string matching 40 150950107105 SHAH VRAJ MANOJ algorithm.35 150950107100 SHAH RIKEN JAINESHBHAI Shortest paths 36 150950107101 SHAH SAGAR SANDIPKUMAR Fractional Knapsack Problem. CHAVDA URVASHIBEN Travelling Salesman problem.Activity 53 150950107118 THAKAR SHREY NILESHKUMAR selection problem Minimum Spanning trees 54 (Kruskal’s algorithm. NP-Hard Problems 45 150950107110 PATEL SHRADDHA YOGESHBHAI Approximation algorithm RATHOD SHRESHTH Hamiltonian problem. 60 Minimax principle 160953107005 JASHWANTBHAI LADANI RIDDHI Knapsack problem 61 160953107010 CHANDRAKANTBHAI 62 160953107011 NIGAM SHEFALI AJAY Shortest paths 63 160953107012 NIKUMBH SAYLI MAHESHBHAI Fractional Knapsack Problem. SUKHADIYA RACHANA Assembly Line-Scheduling. 55 150950107120 TRIVEDI AAYUSHI SNEHAL Depth First Search 56 150950107121 VAMJA DEVVRAT RAJESHBHAI Breath First Search VANJANI SHUBHANG Topological sort 57 150950107122 BHARATKUMAR 58 150950107123 FOJDAR VISHAL SHIVKUMAR Connected components 59 160953107003 BHAVSAR MEET ASHOKKUMAR The Eight queens problem . CSE. Enrollment Name Enrollment Name No Enrollment Name No. Assistant Professor. Sr. 9. No. ITM Universe Vadodara . Sr. Batch Wise student List Batch -B BATCH-B1 Batch B2 Batch B3 Sr. 1 150950107086 Rashmi Anilkumar 1 150950107109 Shivnani Parshav Vinod 1 150950107091 Shah Akshat Vipul 2 150950107078 Patel Ritukumari Virendrabhai 2 150950107120 Trivedi Aayushi Snehal 2 150950107094 Shah Kalp Rathod Shreshth 3 150950107119 Thakkar Reshma Manish 3 150950107111 3 150950107074 Patel Karan Maniojbhai Jashvantkumar 4 150950107114 Jadav Steve Sureshkkumar 4 150950107081 Patel Vaibhav Mahendrabhai 4 150950107093 Shah Jenisha Jaydutt 5 150950107113 Soni Riya Hiteshkumar 5 150950107106 Shaherawala Harsh 5 150950107105 Shah Vraj Manoj 6 150950107073 Patel Hardi Sureshbhai 6 150950107110 Patel Shraddha Yogeshbhai 6 150950107085 Patel Raj Kirankumar Vanjani Shubhang 7 150950107116 Senjaliya Tanvi Ambabhai 7 150950107122 7 150950107123 Fojdar Vishal Shivkumar Bharatkumar 8 150950107108 Sharma Shivani Nileshbhai 8 150950107064 Param Trusha Prashant 8 150950107077 Patel Nirmal Upendrabhai 9 150950107098 Shah Nupur Dharmeshbhai 9 150950107117 Tapkir Pooja 9 150950107097 Shah Nisarg Bhagyesh 10 150950107082 Patil Bhumika Arunrao 10 150950107071 Patel Dhruv Dilipbhai 10 160953107018 Trivedi Rudhra Kamlesh 11 150950107088 Pandya Rudra Nikhil 11 150950107076 Patel Niket Chirag 11 150950107096 Shah Neel Sanjay 12 150950107115 Sukhadiya Rachana Pareshbhai 12 150950107100 Shah Riken Jaineshbhai 12 160953107010 Ladani Riddhi Chandrakantbhai 13 150950107083 Sata Pranshi 13 150950107104 Shah Shlok 13 160953107013 Parmar Amita Arvindbhai Suthar Manankumar 14 150950107070 Patel Dharmik Mitesh 14 160953107017 14 150950107072 Patel Dhruv Vasant Jitendrakumar 15 160953107012 Nikumbh Sayli Maheshbhai 15 150950107068 Desai Parth Rajeshbhai 15 160953107003 Bhavsar Meet Ashokkumar Sanglod Parthivsinh 16 150950107065 Parekh Vidhisha Umeshkumar 16 150950107089 Jaydeepsinh 16 150950107118 Thakar Shrey Nileshkumar 17 150950107095 Shah Khushboo Chandrakant 17 150950107099 Shah Parth Divyesh 17 160953107016 Shah Princy Vrajeshkumar Chavda Urvashiben 18 150950107107 Sharma Harshda Anil 18 150950107101 Shah Sagar Sandipkumar 18 160953107005 Jashwantbhai 19 150950107112 Shukla Helly Rajeshkumar 19 150950107079 Patel Sneh Kamleshkumar 19 160953107011 Nigam Shefali Ajay 20 150950107066 Parikh Anuj Hemantkumar 20 160953107014 Parmar Premlal Santlal 20 150950107084 Purohit Pradip Shivlal 21 150950107103 Shah Samyak Yogesh 21 150950107121 Vamja Devvrat Rajeshbhai 21 150950107067 Parikh Nisarg Mrunal 22 150950107092 Shah Dhruv Ketankumar 22 150950107087 Raval Vandan Subject Teacher: Anuj Kumar Jain. . Dept. Joshi Barbhaya Bhavya Maitri Patel Bahniman Borah 18 150950107050 18 150950107008 18 150950107010 Manishkumar 19 150950107003 Agrawal Charmi Dushyant 19 150950107034 Harsh Patel 19 150950107061 Palak 20 150950107057 Naik Harshal Rohitbhai 20 150950107052 Mehta Zankhit Mukesh 21 150950107032 Goswami Priyank Brijeshgiri 21 150950107059 Neel Patel 22 150950107028 Doshi Rishi Deepakkumar 22 150950107007 Ayushi Prajapati Subject Teacher: Anuj Kumar Jain. ITM Universe Vadodara .Patel 15 150950107011 Bhagat Rahikumar Ramkrishna 16 160953107001 Kavinb. 1 150950107037 Jadav Priyal Pravinbhai 1 150950107029 Gandhi Helly 1 150950107014 Bhatt Valay Avinash Kotwal Khushbu Dipakbhai Himani Modi Kushagar Sharma 2 150950107045 2 150950107036 2 150950107048 3 150950107038 Jambekar Vishakha 3 150950107041 Joshi Heta Alkesh 3 150950107027 Diwakar Shubham Manohar 4 150950107040 Jivani Simran Yusufbhai 4 150950107049 Luhar Meet Jagdip 4 150950107024 Dhopte Abhiraj Pratap 5 150950107044 Khatri Kamal Ramkrishna 5 150950107009 Bansari Patel 5 150950107053 Mehul Purohit 6 150950107039 Jhanvi J Patel 6 150950107060 Ninawe Ruchi Prakash 6 160953107008 Aakurti Desai 7 150950107021 Desai Aneri Divyang 7 150950107020 Dave Richa Rajendra 7 150950107043 Kaydawala Moiz Fakhribhai 8 150950107012 Bhandari Namas Sanjiv 8 150950107017 Choxi Janki Nikunj 8 160953107006 Kush M . No.Anghan 16 150950107055 Modi Harsh Nayankumar 16 150950107005 Alves Christo Alfred 17 150950107022 Devanshi 17 150954107001 Rahul Jalindar Sonewane 17 160953107009 Shripad S. Sr. Assistant Professor. Batch . .A Batch-A1 Batch A2 Batch A3 Sr Sr.Dave 9 150950107026 Dialani Dimple Jairam 9 150950107058 Naman Sinha 9 150950107001 Aditya Hirenkumar Vaidya 10 150950107023 Dhamdhere Shalaka Sudhir 10 160953107002 Kevin B . Dept. Enrollment Name Enrollment Name Enrollment Name No. CSE. Dave 13 150950107046 Kukreja Jugal Gajendrapal 13 150950107051 Mehta Hariom 13 150950107025 Dhrumil Shah 14 150950107030 Gandhi Parth Vijaykumar 14 150950107018 Choyal Ketan Rajpal 14 150950107002 Advait G Ranade 15 150950107054 Shah Miral Mehulbhai 15 160953107015 Vidhi D . N o.Anghan 10 150950107015 Brahmbhatt Yash Rajendra 11 150950107031 Gonsalves Aaron Dominic 11 150950107004 Agrawal Drashti Kamlesh 11 150950107047 Kunj Rakeshkumar Patel 12 150950107056 Mokani Prashant Amrishbhai 12 150950107035 Harsh Pravinbhai Bhuva 12 160953107007 Rudra K. Insertion sort. 12 90 We reduce the effective size of the array to 4. Assistant Professor. 12. We then reduce the effective size of the array by one element and repeat the process on the smaller (sub)array. We have shown the largest element and the one at the highest index in bold. 63. For example. shown with array elements in sequence separated by commas: 63. The largest element in this effective array (index 0-3) is at index 1. it is popularly used to illustrate sorting. 90 The last effective array has only one element and needs no sorting. after such a step. 12. ITM Universe Vadodara . 75. making the highest index in the effective array now 3. the largest element will have ``bubbled'' from whatever its original position to the highest index position in the effective array. The entire array is now sorted. 27. 90. As in selection sort. 75. and the rightmost element is at the highest array index. Bubble Sort: An alternate way of putting the largest element at the highest index in the array uses an algorithm called bubble sort. each iteration reduces the effective size of the array. 75. consider the following array. bubble sort focuses on successive adjacent pairs of elements in the array. We begin by selecting the largest element and moving it to the highest index position. Merge sort and Quick sort Theory: Selection Sort: The idea of selection sort is rather simple: we repeatedly find the next largest (or smallest) element in the array and move it to its final position in the sorted array. 27 The leftmost element is at index zero. The two algorithms differ in how this is done. Dept. 27. nor as straightforward. We can do this by swapping the element at the highest index and the largest element. 12.e. in our case. and either swaps them or not. Experiment No: 1 Aim: Write a C program to implementation and Time analysis of sorting algorithms. 75. 4 (the effective size of our array is 5). We include it here as an alternate method. Assume that we wish to sort the array in increasing order. In either case. 90 12. Rather than search the entire effective array to find the largest element. While this method is neither as efficient. the larger of the two elements will be in the higher index position. Like selection sort. and the process is repeated. 27. as selection sort. the idea of bubble sort is to repeatedly move the largest element to the highest index position of the array. The focus then moves to the next higher position. i. We then swap the element at index 2 with that at index 4. The result is: 63. compares them. 63. CSE. 75. Selection sort. so we swap elements at index 1 and 3 (in bold): 63. The largest element in this effective array (index 0-4) is at index 2. Bubble sort. Unsorted array: 45 67 12 34 25 39 Effective array of size 6: 45 12 34 25 39 67 Effective array of size 5: 12 34 25 39 45 Effective array of size 4: 12 25 34 39 Effective array of size 3: 12 25 34 Effective array of size 2: 12 25 Effective array of size 1: 12 Subject Teacher: Anuj Kumar Jain. The process stops when the effective size of the array becomes 1 (an array of 1 element is already sorted). the smallest element at the beginning of the array and the largest element at the end. When the focus reaches the end of the effective array. 90 The next two steps give us: 27. The element at the highest index in the sub- array is inserted into the current sub-array. for i ← 1 to length[A] 2. Type numbers to be sorted. do key ← A[j] 3. for j ← 1 to length[A] 2. 4. for j ← 2 to length[A] 2. This algorithm is called insertion sort. do key ← A[j] 3. Dept. i←j-1 5. //Insert A[j] into the sorted sequence A[1 …. i←i–1 8. the effective size is increased. j . EOF to quit Inserting Element: 23 Sorted Array:23 Inserting Element:12 SORTED ARRAY: 12 23 Inserting Element:35 SORTED ARRAY: 12 23 35 Inserting Element:30 SORTED ARRAY: 12 23 30 35 Inserting Element:47 SORTED ARRAY: 12 23 30 35 47 Inserting Element:10 SORTED ARRAY: 10 12 23 30 35 47 Insertion sort can be adapted to sorting an existing array. then exchange A[j] ↔ A[j .1] 4. In this way. do if A[j] < A[j . do A[i + 1] ← A[i] 7. However. if we are reading the data into an array one element at a time.1]. Algorithm: BUBBLESORT (A) 1. Each step works with a sub-array whose effective size increases from two to the size of the array. etc.. do for j ← length[A] downto i + 1 3. Sorted array: 12 25 34 39 45 67 Insertion Sort: The two sorting algorithms we have looked at so far are useful when all of the data is already present in an array. we can take another approach .1] INSERTION-SORT (A) 1. A[i + 1] ← key SELECTION-SORT (A) 1.insert each element into its sorted position in the array as we read it. while i > 0 and A[i] > key 6. loc ← j 4. and we wish to rearrange it into sorted order. ITM Universe Vadodara . we can keep the array in sorted form at all times. Assistant Professor. CSE. for i ← j to length[A] 5. do if A[i ] < key Subject Teacher: Anuj Kumar Jain. if p < r 2. 6. Subject Teacher: Anuj Kumar Jain. exchange A[i ] ↔ A[ j ] 7. MERGE(A. then 3. for i ← 1 to n1 5. r) 5. r) 3. p. L[n1 + 1] ← ∞ 9.h> #include <string. p. q. then q ← PARTITION(A. q) 4. p. n2 ← r . if loc !=j 9. then Swap A[j] & A[i] MERGE-SORT(A.int). then A[k] ← L[i] 15. r) 1.int). for k ← p to r 13. QUICKSORT(A. then i ←i + 1 6.int.p + 1 2. r) PARTITION(A. then key=A[i] 7. else A[k] ← R[j] 17.h> #include <stdlib. r) MERGE(A. Assistant Professor. void merge(int *. p.int. loc=i 8. q. QUICKSORT(A. j←j+1 QUICKSORT(A. MERGE-SORT(A. r) 1. for j ← 1 to n2 7. ITM Universe Vadodara .h> void insertion_sort(int *. n1 ← q . r) 1. do if L[i] ≤ R[j] 14. if p < r 2. //create arrays L[1 _ n1 + 1] and R[1 _ n2 + 1] 4.int). void quick_sort(int *. i ← 1 11. p. Dept. i←i+1 16.h> #include <limits. CSE. r) 1.int). q − 1) 4. void selection_sort(int *. R[n2 + 1] ← ∞ 10. j ← 1 12. do L[i] ← A[p + i .1] 6. void merge_sort(int *.int. do R[j] ← A[q + j] 8. do if A[ j ] ≤ x 5. q + 1. for j ← p to r − 1 4. q + 1.int). x ← A[r] 2.q 3. p.h> #include <malloc. MERGE-SORT(A.h> #include <time. p.int. p. return i + 1 Source Code: #include <stdio. i ← p − 1 3. exchange A[i + 1] ↔ A[r] 8. fprintf(fptr."\t %lf ". insertion_sort(b. merge_sort(b.n*sizeof(int)). b=(int *)malloc(sizeof(int)*(n+2)). for(i=1. bubble_sort(b. gettimeofday(&start. ITM Universe Vadodara .int.diff1). memcpy(&b[1]. memcpy(&b[1]. } for(n=100. fprintf(fptr."\t %lf ". diff1 = ((end. Assistant Professor.&a[1]. Dept.tv_sec * 1000000 + end.n).&a[1].txt". fprintf(fptr.diff2.n*sizeof(int)).(start.tv_sec * 1000000 + end.tv_sec * 1000000 + start.*b.tv_sec * 1000000 + end.tv_usec).tv_usec))/1000000. FILE *fptr. struct timeval start.n). NULL). NULL). gettimeofday(&start. Subject Teacher: Anuj Kumar Jain. if(fptr==NULL) { printf("ERROE! File Not found").tv_usec))/1000000.i<=n.tv_sec * 1000000 + start.int).tv_usec))/1000000.1.&a[1].n). fprintf(fptr. gettimeofday(&end.tv_sec * 1000000 + start.0.diff1).n=n+100) { printf("\n Number of element is %d".i++) a[i]=rand()%200. void bubble_sort(int *. NULL). diff1 = ((end.int). exit(0).int)."a"). diff1 = ((end.sizeof(int)*n).tv_sec * 1000000 + start. void print(int *. NULL).n. gettimeofday(&end.i. end.n).tv_usec). double diff1.&a[1]. NULL). memcpy(&b[1]. selection_sort(b. NULL). n. a=(int *)malloc(sizeof(int)*(n+2)). diff1 = ((end.tv_usec))/1000000.n<=100000.(start."\t %lf ".(start. gettimeofday(&end. gettimeofday(&start. NULL).tv_sec * 1000000 + end. void main() { int *a.n*sizeof(int)). gettimeofday(&start.diff1).(start.tv_usec). gettimeofday(&start. NULL). fprintf(fptr.tv_usec).0.0. NULL).diff1). memcpy(&b[1]." \n %d \t %lf ".n).0.int partition(int *." \n bubble_sort \t selection_sort \t insertion_sort \t merge_sort \t quick_sort "). fptr=fopen("anujsort. CSE. gettimeofday(&end. n). memcpy(&b[1]."\t %lf ". } void bubble_sort(int *b.key. Assistant Professor. j--. key=b[i].j<n-i+1.j.&a[1].int n) { int i.i++) { for(j=1.loc.tv_sec * 1000000 + end.tv_sec * 1000000 + start. } void print(int *b.int n) { int i. quick_sort(b.min.tv_usec). b[j+1]=temp.i++) { j=i-1.temp.i<n. for(i=1. Dept. Subject Teacher: Anuj Kumar Jain. NULL). while(j>0 && b[j]>key) { b[j+1]=b[j].i<=n.int n) { int i. fprintf(fptr. diff1 = ((end. free(a).i<n.b[i]). for(i=2. } } } } void insertion_sort(int *b.j.j. } } void selection_sort(int *b.i++) printf("%d \t".n*sizeof(int)). } b[j+1]=key.0.i++) { loc=i. b[j]=b[j+1].int n) { int i. ITM Universe Vadodara . gettimeofday(&end. CSE.temp.(start.i<=n. for(i=1. for(i=1.tv_usec))/1000000.diff1).j++) { if(b[j]>b[j+1]) { temp=b[j].1. } fclose(fptr). free(b). } void merge_sort(int *b.j<=n.r). b[i]=b[j].temp. } } int partition(int *b. for (j=i+1.int p. b[i]=b[loc]. quick_sort(b.p. quick_sort(b. if(p<r) { q=(p+r)/2. Subject Teacher: Anuj Kumar Jain. b[loc]=temp.p.q-1).j<r. for(j=p.r). b[i+1]=b[r]. Assistant Professor. return i+1. } } temp=b[i+1]. b[r]=temp.int r) { if (p<r) { int q=partition(b.int p.q+1.q+1.j++) { if(b[j]<min) { loc=j.int r) { int q. ITM Universe Vadodara .r).j.int r) { int x=b[r]. merge_sort(b. merge_sort(b.i.j++) { if(b[j]<=x) { i++. b[j]=temp. } } } void quick_sort(int *b. min=b[j].int p.q).p. } } if(i!=loc) { temp=b[i]. Dept. min=b[i]. temp=b[i]. i=p-1. CSE. r1[60000]. memcpy(&r1[1].150445 0.00042 0.402069 0.&b[p].000724 0.001042 0.00237 0.003006 10000 0.int r) { int n1. for(k=p.n1*sizeof(int)).000126 0.113992 0.00962 0.006855 Subject Teacher: Anuj Kumar Jain.001121 0.003096 0.084855 0.069593 0.002874 0.001284 0.j.000165 0.k. Dept.k<=r.011453 0. merge(b.r).102549 0.019969 0.001022 0.000166 0.045863 0.p. input Bubble sort Selection sort Insertion sort Merge sort Quick sort 100 0.001486 0.52792 0.004764 0.038178 0.002593 0.003801 0. j++.002499 9000 0. CSE. n2=r-q.238147 0.0002 0.00061 4000 0.002102 0.p.000036 0.000923 0.000242 0.00163 7000 0.001222 6000 0.00111 0.i.000058 300 0.001311 0.000162 0.001638 0.001802 0. //printf("\n p=%d q=%d r=%d".000281 0.003556 15000 1.016044 0.06142 0.n2*sizeof(int)).081687 0.000123 500 0. Assistant Professor.int q.526018 0.q.000537 0.000462 0.000108 0.000168 600 0.000159 2000 0. i++. n1=q-p+1.001326 0.318477 0. //printf("\n n1=%d n2=%d".&b[q+1].000127 800 0.005373 0.021366 0.e. } } void merge(int *b.031555 0.001659 0.000179 700 0.130002 0.000296 0.28426 0.000069 0. ITM Universe Vadodara . memcpy(&l[1].000053 0.002463 0.001594 0.001725 0.n2).000391 3000 0.n2.000141 0. int l[60000]. j=1.000932 5000 0.000534 0.002044 8000 0.00035 0.236734 0.000092 400 0.000122 0.q. r1[n2+1]=INT_MAX.192691 0.00428 0.000976 0.r).000913 0.000071 0.000227 900 0.115334 0.190412 0.003982 0. } else { b[k]=r1[j].000917 0.037803 0.001899 0. } } } Output: No of Time Analysis of sorting in Sec.058946 0.000027 200 0.169236 0.k++) { if(l[i]<r1[j]) { b[k]=l[i].n1. l[n1+1]=INT_MAX.int p.000173 0.000147 1000 0. i=1. 147911 0.10226 0.232583 8.028862 7.796114 9.952158 2.332151 23.124209 80000 35.119049 11.102747 18.499344 11.164825 0.897662 0.016298 30000 4.057679 55000 17.0059 0.022568 35000 6.877661 3.910255 9.365039 0.771687 2.958179 5.400077 12.140159 85000 40.523092 13.003879 0.108001 75000 31.004879 0.960021 21.954277 10.405512 1.193779 100000 56.959093 14.131593 0.466667 6.607007 0.094225 70000 27.029898 40000 8.015491 0.977144 8.018522 0.505142 0.578016 0.425842 4.175276 95000 50.107938 1. CSE.746013 2.014401 0.790779 0.098946 3.019449 0.01679 0.080847 65000 23.608704 16.007999 0.149979 0. 20000 2. Dept.145813 0.868809 1.006992 0.163347 7. ITM Universe Vadodara .214541 Analysis Chart of all above sorting: Analysis chart of merge sort and quick sort: Subject Teacher: Anuj Kumar Jain.017307 0.010005 0.464137 0. Assistant Professor.086117 0.011145 0.016409 0.020544 0.01109 25000 3.544112 0.937573 0.038039 50000 13.156727 90000 46.013053 0.068813 60000 20.851672 3.345727 0.012111 0.229253 0.901025 5. Assistant Professor. Effectiveness e. Dept. Definiteness d. Merge Sort and Quick Sort are shown in table 1: Sorting Algorithm Best Case Average Case Worst Case bubble sort Selection Sort. worst case and Average case.Complexity of bubble sort Selection Sort. Insertion Sort. Input b.. i. merge sort and quick sort. Insertion Sort. Output c. Ans: Complexity of bubble sort Selection Sort. selection sort.How is an algorithm’s time efficiency measured? Ans: Time efficiency indicates how fast the algorithm runs. What are the characteristics of an algorithm? Every algorithm should have the following five characteristics a.e. O(n) Merge Sort Quick Sort Q 2. What is an algorithm? Ans: An algorithm is a sequence of unambiguous instructions for solving a problem. ITM Universe Vadodara . Basic operation is the most time consuming operation in the algorithm’s innermost loop. An algorithm’s time efficiency is measured as a function of its input size by counting the number of times its basic operation (running time) is executed. insertion sort. Conclusion: We have successfully implemented the following sorting algorithm such as bubble sort. Form the implementation part it is concluded that for the worst case scenario merge sort is better than among the sorting technique but it also required the extra memory space in order of Question & Answer: Q 1. for obtaining a required output for any legitimate input in a finite amount of time Q 3. Merge Sort and Quick Sort in best case. Q 4. Termination Subject Teacher: Anuj Kumar Jain. CSE. Insertion Sort. if there exist some positive constant c and some nonnegative integer n0 such that t(n) ≥cg(n) for all for all n ≥ n0 Q 7. ITM Universe Vadodara . Basic operation: the operation that contributes most towards the running time of the algorithm The running time of an algorithm is the function defined by the number of steps (or amount of memory) required to solve input instances of size n. CSE. Q 6. denoted t(n) Є Ω((g(n)) . In other words. satisfies the following constraints to be minimum. an algorithm can be defined as a sequence of definite and effective instructions. Assistant Professor. when the design. which terminates with the production of correct output from the given input. Time efficiency .What do you mean by time complexity and space complexity of an algorithm? Time complexity indicates how fast the algorithm runs.how fast an algorithm in question runs. i. Q 5. an algorithm is a step by step formalization of a mapping function to map input set onto an output set. Space complexity deals with extra memory it require. Time efficiency is analyzed by determining the number of repetitions of the basic operation as a function of input size. Space efficiency – an extra space the algorithm requires b. Define Big Omega Notations A function t(n) is said to be in Ω(g(n)) . The algorithm has to provide result for all valid inputs. Therefore. if t(n) is bounded below by some positive constant multiple of g(n) for all large n.e.What are the different criteria used to improve the effectiveness of algorithm? a. viewed little more formally. i. The effectiveness of algorithm is improved. Subject Teacher: Anuj Kumar Jain.. Dept. ii. int.int. int). item) 1. binary_search(a. n. 5. if(item==a[i]) 3. binary_search(a. r. consider an array of integers of size NN. if(a[q]==item) 6. else if (a[q]>item) 8. Another approach to perform the same task is using Binary Search.h> #include <stdlib. A simple approach is to do linear search. else 4. 9. return NULL 3. A simple approach is to do linear search. i. CSE. else 10. if(p>r) 2.p.h> #include <time.q+1. You should find and print the position of all the elements with value xx. p.int). void binary_search(int *. retun NULL Binary Search (a. Binary Search: Search a sorted array by repeatedly dividing the search interval in half. return q 7.item). Assistant Professor. the linear search is based on the idea of matching each element from the beginning of the list to the end of the list with the integer xx. if(i>n) 5. Subject Teacher: Anuj Kumar Jain. write a function to search a given element x in arr[]. Experiment No: 2 Aim: Write a C program to implementation and Time analysis of linear and binary search algorithm. return -1. Theory: Linear Search: Linear search is used on collections of items.h> void linear_search(int *.e Start from the leftmost element of arr[] and one by one compare x with each element of arr[] If x matches with an element. For example. Begin with an interval covering the whole array. return the index. Repeatedly check until the value is found or the interval is empty.h> #include <limits. ITM Universe Vadodara . Source Code: #include <stdio.r. return loc 4. item) 1. Otherwise narrow it to the upper half. q=(p+r)/2. Dept.item). If x doesn’t match with any of elements. narrow the interval to the lower half. and then printing the position of the element if the condition is `True'. Algorithm: Linear search (a. for i ← 1 to n do 2. Here. The time complexity of above algorithm is O(n). If the value of the search key is less than the item in the middle of the interval.int . Given an array arr[] of n elements.q-1. It relies on the technique of traversing a list from start to end by exploring properties of all the elements that are found on the way. else Subject Teacher: Anuj Kumar Jain.n.tv_usec))/1000000. Dept.total). gettimeofday(&start.tv_sec*1000000+end. for(i=1.NULL).i++) if(item==a[i]) { printf("\n Element is found at %d". gettimeofday(&end. total=((end. ITM Universe Vadodara .tv_usec))/1000000.i<=n.1. gettimeofday(&end.end. exit(1).tv_sec*1000000+start. Assistant Professor.total).item). void main() { int *a. for (i=1. //printf("Enter the Searching item").int ).total)."number of input \t Time for Linear Search \t time for Binary Search"). if(fptr==NULL) { printf("\n ERROR! File is not Found").n.tv_sec*1000000+end. total=((end.int item) { int i.00. struct timeval start.n=n+1000) { a=(int *)malloc(n*sizeof(int)). linear_search(a.total).int n.NULL) binary_search(a. gettimeofday(&start.void insertion_sort(int *. item=rand()%200.item."a+").int r.&item).NULL).i++) a[i]=rand()%200. } void linear_search(int *a.tv_usec)-( start. //scanf("%d".NULL). if(p>r) printf("\n Element is not Found"). double total. fprintf(fptr.tv_usec)+( start.n<=100000. } fclose(fptr)."\n %d \t %lf \t".n.n.i.int p. fptr=fopen("abc. //printf("Total Time for Linear Search =%lf". insertion_sort(a.txt"."\n %d \t %lf \t". } void binary_search(int *a. fprintf(fptr.int item) { int q. } if(i>n) printf("\n Element is not found"). FILE *fptr.00. for(n=1000.&n). break. //printf("Total Time for Linear Search =%lf".n.i).i<=n. } //printf("Enter the number of input"). CSE.item). //scanf("%d".tv_sec*1000000+start.n). fprintf(fptr. Dept.000046 0. Assistant Professor.r.int n) { int i.000009 20000 0.00001 8000 0.000239 0.000021 0.000026 4000 0.q).000035 0.000288 0.00004 0.000215 0.000009 9000 0. { q=(p+r)/2.000121 0.000019 40000 0.000097 0. j--.000051 0.item).000312 0.q+1.000011 45000 0.key.000011 65000 0.000074 0.000144 0. } } } void insertion_sort(int *a.00001 70000 0. else binary_search(a.000014 7000 0.i<=n.000011 25000 0.000027 0.000023 3000 0.q-1.000014 50000 0. if(a[q]==item) printf("\n Element is found at %d".000037 0. } } Output: No of Time Analysis in Sec. j=i-1.000011 30000 0.000013 35000 0.000191 0.00001 15000 0.000012 Subject Teacher: Anuj Kumar Jain.p.i++) { key=a[i].000039 0.000266 0.000334 0.000008 5000 0.j. else { if (a[q]>item) binary_search(a.00001 10000 0.000025 6000 0.000029 0.00004 0.000011 60000 0. } a[j+1]=key.000357 0.000012 2000 0.00001 55000 0. ITM Universe Vadodara . for(i=2. while(j>0 && a[j]>key) { a[j+1]=a[j]. Input Linear Search Binary Search 1000 0.item).000168 0.000011 75000 0. CSE. 000013 100000 0. Form the implementation part it is concluded that Binary search is better than linear search.000009 85000 0.00038 0.0003 Linear Search 0.0002 Binary Search 0.000404 0.. How divide and conquer technique can be applied to binary trees? Since the binary tree definition itself divides a binary tree into two smaller structures of the same type. Otherwise. Q 6.0004 Times in Sec 0.000013 95000 0..... Q 4. Efficiency of linear search? The time taken or the number of comparisons made in searching a record in a search table determines the efficiency of the technique. It works by comparing a search key K with the array's middle element A[m]. called a search key.000012 TIme Analysis of Birary Search & Linear Search 0.. Assistant Professor. What do you understand by the term “linear search is unsuccessful”? Subject Teacher: Anuj Kumar Jain..A[m-l] A[m] A[m+l]... Q 9. Question & Answer: Q 1. Define: linear search? In linear search. Define Binary Search Binary Search is remarkably efficient algorithm for searching in a sorted array.00043 0. Q 7. in a given set Q 5. the left sub tree and the right sub tree.0005 0....0001 0 1 6 71 76 11 16 21 26 31 36 41 46 51 56 61 66 81 86 91 96 No of Input *100 Conclusion: We have successfully implemented the linear search and binary search. How many types of searching are there? There are basically two types of searching:-linear search and Binary search.. Dept.000012 90000 0.. we access each elements of an array one by one sequentially and see whether it is desired element or not... What can we say about the average case efficiency of binary search? A sophisticated analysis shows that the average number of key comparisons made by binary search is only slightly smaller than that in the worst case O Q 3. many problems about binary trees can be solved by applying the divide- conquer technique. A[n-l] Q 2..000451 0.00048 0. the same operation is repeated recursively for the first half of the array if K< A[m] and for the second half if K > A[m] A[0]... If they match... 80000 0. What do you mean by “Searching” problem? The searching problem deals with finding a given value... Q 8.. the algorithm stops.0006 0. ITM Universe Vadodara . Why binary search method is more efficient then liner search? It is because less time is taken by linear search to search an element from the sorted list of elements. CSE. Q 14. Q 12. Search will be unsuccessful if all the elements are accessed and the desired elements are not found. of average case we may have to scan half of the size of the array (n/2). Subject Teacher: Anuj Kumar Jain. when the record is present in first position then how many comparisons are made? Only one comparison. Q 13. Dept. What is the drawback of linear search? There is no requisite for the linear Search. Define the complexity of binary search? In worst cause there is log (n+1) in the average causes. Q 15. Q 10. What do you understand by binary search? A binary search algorithm or binary chop is a technique for finding a particular value in a sorted list. when the record is present in last position then how many comparisons are made? N comparisons have to be made. During linear search. Q 11. ITM Universe Vadodara . During linear search. the no. What is worse case? In the worse case. CSE. Explain the steps of algorithm of binary search? Step 1: Find the middle element of array f+1/2 = middle value Step 2: Our derived element is greater than the main element Q 16. Assistant Professor. Following procedure are used in a sorting algorithm and a priority-queue data structure. and right child RIGHT(i ) can be computed. which runs in O(n lg n) time. If we now “discard” node n from the heap (by decrementing heap-size[A]). left child LEFT(i ). The Subject Teacher: Anuj Kumar Jain. Since the maximum element of the array is stored at the root A[1]. An array A that represents a heap is an object with two attributes: length[A]. The root of the tree is A[1]. ITM Universe Vadodara . Assistant Professor. the specifics of which depend on the kind of heap. and the subtree rooted at a node contains values no larger than that contained at the node itself. is the key to maintaining the max-heap property. where heap-size[A] ≤ length[A]. which is filled from the left up to a point. Each node of the tree corresponds to an element of the array that stores the value in the node. the max-heap property is that for every node I other than the root. heap-size[A]. n]. its height is . Experiment No: 3 Aim: Implementation of max-heap sort algorithm Theory: Heaps: The (binary) heap data structure is an array object that can be viewed as a nearly complete binary tree. length[A]] may contain valid numbers. we observe that can easily be made into a max-heap. For the heap sort algorithm. There are two kinds of binary heaps: 1) max-heaps 2) min-heaps In both kinds. Thus. and given the index i of a node. The MAX-HEAPIFY procedure. the largest element in a max-heap is stored at the root. no element past A[heap-size[A]]. the value of a node is at most the value of its parent. Since a heap of n elements is based on a complete binary tree. which runs in linear time. . the indices of its parent PARENT(i ). The heapsort algorithm starts by using BUILD-MAX-HEAP to build a max-heap on the input array A[1 . The BUILD-MAX-HEAP procedure. sorts an array in place. The smallest element in a min-heap is at the root. That is. A min-heap is organized in the opposite way. is an element of the heap. The HEAPSORT procedure. the min-heap property is that for every node i other than the root. we use max-heaps. The tree is completely filled on all levels except possibly the lowest. We shall see that the basic operations on heaps run in time at most proportional to the height of the tree and thus take O(lgn) time. CSE. Viewing a heap as a tree. produces a maxheap from an unordered input array. Min- heaps are commonly used in priority queues. where n = length[A]. the number of elements in the heap stored within array A. . it can be put into its correct final position by exchanging it with A[n]. although A[1 . which is the number of elements in the array. and we define the height of the heap to be the height of its root. the values in the nodes satisfy a heap property. which runs in O(lg n) time. that is. we define the height of a node in a heap to be the number of edges on the longest simple downward path from the node to a leaf. In a max-heap. Dept. Subject Teacher: Anuj Kumar Jain.h> #include <stdlib.i.h> void heap_sort(int []. ITM Universe Vadodara . The heapsort algorithm then repeats this process for the maxheap of size down to a heap of size 2. MAX-HEAPIFY(A.h> #include <malloc. CSE. void main() { int *a. Dept. Algorithm: HEAPSORT(A) 1.n<=1000000."a"). 1) BUILD-MAX-HEAP(A) 1. return LEFT(i ) 1. Assistant Professor. then exchange A[i ] ↔ A[largest] 10. for i ← length[A]/2 downto 1 3. r ← RIGHT(i ) 3. return 2i + 1 Source Code: // Heap Sort problem #include <stdio. if l ≤ heap-size[A] and A[l] > A[i ] 4. heap-size[A] ← heap-size[A] − 1 5. i ) 1. 1).h> #include <limits.int ).children of the root remain max-heaps. then largest ←r 8. largest) PARENT(i ) 1. however. void build_heap(int []. if largest != i 9. for i ← length[A] downto 2 3. double diff.n. which leaves a max-heap in . for(n=10000.n+=10000) { printf("Enter the number:"). do MAX-HEAPIFY(A. BUILD-MAX-HEAP(A) 2. All that is needed to restore the max-heap property. end. then largest ←l 5. fptr=fopen("heapsort.int . l ← LEFT(i ) 2. is one call to MAX-HEAPIFY(A. printf("%d".int). if r ≤ heap-size[A] and A[r] > A[largest] 7.h> #include <time. MAX-HEAPIFY(A. FILE *fptr.txt". do exchange A[1] ↔ A[i ] 4. struct timeval start. else largest ←i 6.int). i ) MAX-HEAPIFY(A. return 2i RIGHT(i ) 1. void maxheapify(int []. but the new root element may violate the max-heap property. heap-size[A] ← length[A] 2.n). fprintf(fptr,"%d \t",n); a=(int*)malloc((n+1)*sizeof(int)); for(i=1;i<=n;i++) a[i]=rand(); gettimeofday(&start, NULL); heap_sort(a,n); gettimeofday(&end, NULL); diff = ((end.tv_sec * 1000000 + end.tv_usec)- (start.tv_sec * 1000000 + start.tv_usec))/1000000.0; printf(" heapsort took: \t %lf \t sec.\n", diff); fprintf(fptr,"%lf\n", diff); /* printf("\n print the sorted element\n"); for(i=1;i<=n;i++) printf("%d \t",a[i]); */ free(a); } fclose(fptr); } void heap_sort(int a[],int n) { int i,temp,j; build_heap(a,n); for (i=n;i>1;i--) { temp=a[i]; a[i]=a[1]; a[1]=temp; maxheapify(a,1,i-1); } } void build_heap(int a[],int n) { int i; for (i=n/2;i>=1;i--) { maxheapify(a,i,n); } } void maxheapify(int a[],int i,int n) { int r=i*2+1; int l=i*2; int largest,temp; if(l<=n && a[l]>a[i]) largest=l; else largest=i; if (r<=n && a[r]>a[largest]) largest=r; if(largest!=i) { temp=a[i]; a[i]=a[largest]; a[largest]=temp; maxheapify(a,largest,n); } } Subject Teacher: Anuj Kumar Jain, Assistant Professor, Dept. CSE, ITM Universe Vadodara Output: Conclusion: Question & Answer: Q 1. Define min-heap A min-heap is a complete binary tree in which every element is less than or equal to its children. All the principal properties of heaps remain valid for min-heaps, with some obvious modifications Subject Teacher: Anuj Kumar Jain, Assistant Professor, Dept. CSE, ITM Universe Vadodara Experiment No: 4 Aim: Implementation and Time analysis of factorial program using iterative and recursive method Theory: There are two different methods to perform the repeat task are given follow: Recursive approach: In recursive approach the function calls itself until the condition is met. And it is slower than iteration, which means it uses more memory than iteration. Recursion is like a selection structure, and which makes code smaller and clean. And a function partially defined by itself. Here tracing the code will be more difficult in the case large programs. Iterative approach: Iterative approach is a repetition process until the condition fails, here loops are used such as for, while etc. Here code may be longer but it is faster than recursive. And it consumes less memory compared to recursive approach. If the loop condition is always true in such cases it will be an infinite loop. Factorial of a non-negative integer n, denoted by n!, is the product of all positive integers less than or equal to n. Logic of calculating Factorial is very easy . 5! = 5 * 4 * 3 * 2 * 1 = 120. It can be calculated easily using any programming Language. But if we implement with C & C++ programming language. The maximum size of data type is “long long int”. It cannot store more than factorial of 20. So we have modify the method to calculate the factorial. Algorithm: Iterative_factorial(n): 1. Create an array ‘res[ ]’ of MAX size where MAX is number of maximum digits in output. 2. Initialize value stored in ‘res[ ]’ as 1 and initialize ‘res_size’ (size of ‘res[ ]’) as 1. 3. Do following for all numbers from x = 2 to n...... i. Multiply x with res[ ] and update res[ ] and res_size to store the multiplication result. multiply (res, x) 1. Initialize carry as 0. 2. Do following for i = 0 to res_size – 1 ..... i. Find value of res[i] * x + carry. Let this value be prod. ii. Update res[i] by storing last digit of prod in it. iii. Update carry by storing remaining digits in carry. 3. Put all digits of carry in res[ ] and increase res_size by number of digits in carry Source Code: #include <stdio.h> #include <malloc.h> #include <stdlib.h> #include <time.h> short int *a; int digit=1; void fact_loop(int ); void muyltiply(int ); void fact_recursive(int); int main() { int n, i; struct timeval start, end; double diff1,diff2; FILE *fptr; fptr=fopen("Factorial1.txt","a"); //printf("Enter the number"); //scanf("%d",&n); a=(short int *)malloc(200000*sizeof(short int)); for(n=100;n<10100;n+=100) Subject Teacher: Anuj Kumar Jain, Assistant Professor, Dept. CSE, ITM Universe Vadodara digit=1.diff2). //for(i=digit-1. return 0. printf(" fact_loop took: \t %lf \t sec. NULL).i--) printf("%d". fact_loop(n).i--) // printf("%d". //fprintf(fptr. i++) muyltiply(i). printf(" fact_recursive took: \t %lf \t sec. */ //fclose(fptr). fact_recursive(n). diff2).a[i]). diff1).tv_sec * 1000000 + end. printf("digit=%d\n". gettimeofday(&start. diff1 = ((end. diff). CSE. for(i=1. NULL).0.\n".diff1. } fclose(fptr).(start. //a=(int *)malloc(1000*sizeof(int)). a[j]=x%10. Assistant Professor.n). } Subject Teacher: Anuj Kumar Jain. free(a). //a=(int *)malloc(1000*sizeof(int)). ITM Universe Vadodara . i<=n.tv_usec))/1000000.j. NULL).tv_sec * 1000000 + start. gettimeofday(&end. gettimeofday(&end.{ printf("Enter the number %d". for(i=digit-1. carry=0. a[0]=1.0.tv_sec * 1000000 + end. gettimeofday(&end. } void fact_loop(int n) { int i.tv_usec). } void muyltiply(int i) { int x.tv_sec * 1000000 + start. /*printf("Recursive Factorial: \n"). a[0]=1. carry = x/10. digit=1.n. for(j=0.a[i])."%lf\t".i>=0. printf("\n").(start. j++) { x = a[j]*i+carry. Dept. NULL).i>=0."%d \t %lf\t %lf \n".tv_usec))/1000000. fprintf(fptr.tv_usec).\n". j<digit. diff2= ((end. a[0]=1. //free(a).carry.digit). //free(a). digit++.002912 400 0.298584 1.000437 0. } } Output: no of Time Analysis in Sec input Iterative Method Recursive Method 100 0.242961 1.06962816 7000 2.001123 300 0.033726 1500 0.000231 200 0.13663731 4500 0.13895294 6000 1. CSE.015234 800 0.0013 0.655093 1.317531 1.351643 1. Dept. } } void fact_recursive(int i) { if(i==1) return.026574 1000 0.14520325 6500 1.151271 1.12669366 8000 2.007885 600 0. carry= carry/10. fact_recursive(i-1). while(carry>0) { a[digit] = carry%10.004987 0.012009 0. Assistant Professor.15109924 5000 1.000094 0.166788 1.009265 0.79668 1.003788 0.565919 1.894668 1.497035 1.890322 1.15369869 3500 0.17803407 2500 0. else { muyltiply(i).1504023 9000 3.044891 1.1426058 5500 1.17172153 2000 0.14359431 7500 2.015283 0.1472818 8500 3.080484 1.16877274 4000 0.006959 0.837552 1.005537 500 0.002493 0.1353963 Subject Teacher: Anuj Kumar Jain.651711 1.011307 700 0.020654 900 0.16977897 3000 0.13910119 10000 4. ITM Universe Vadodara .500996 1. CSE. Dept. ITM Universe Vadodara .Conclusion: To calculate the factorial of a number there are two methods which are described above. Subject Teacher: Anuj Kumar Jain. After the computation of C program. the result shows that Iterative method is almost 2 times faster than recursive method. Assistant Professor. 1. . It stores the c[i. j] values in the table. . .i++) scanf("%d".int *. for(i=1.{i} is an optimal solution for W . wn>. w] contains the maximum value that can be picked into the knapsack.wi] additional value. n.m. for(i=1. . in which case it is a subproblem's solution for (i .*p.h> #include<time. i. void select_item(int **. . v2. c[n. a two dimensional array. thief can choose from item 1. Subject Teacher: Anuj Kumar Jain. and thief can choose from items w . printf("Enter the capacity of the Knapsak").h> #include<stdlib. the first row of c is filled in from left to right.n. . and the two sequences v = <v1. Then S` = S . thief takes vi value. then the second row. . in which case it is vi plus a subproblem solution for (i . w=(int *)malloc(sizeof(int)*n). if thief decides not to take item i. vn> and w = <w1.int *. printf("Enter the profit of item ").wi. w] to be the solution for items 1. printf("Enter the weight of item ").int ).i<=n. w] value. Assistant Professor. void main() { int i. Algorithm: Source Code: #include<stdio. c[i-1. i and maximum weight w. The better of these two choices should be made.1) items and the weight excluding wi. That is. Experiment No: 5 Aim: write a C program to implementation of a knapsack problem using dynamic programming. . . and so on.m).i<=n. On other hand.2. scanf("%d". and get c[i .p. . . w] max [vi + c[i-1.1 upto the weight limit w.&p[i]). 0 . .int . Dept.n. ITM Universe Vadodara . We can express this fact in the following formula: define c[i.1. p=(int *)malloc(sizeof(int)*n).wi pounds and the value to the solution S is Vi plus the value of the subproblem. and get c[i . w]} if wi ≥ 0 if i>0 and w ≥ wi This says that the value of the solution to i items either include ith item. . w-wi]. the number of items n.h> void knapsack(int *.i++) scanf("%d". w . w] whose entries are computed in a row-major order. CSE.2.w] = { c[i-1.&w[i]). if the thief picks item i.int ). At the end of the computation. . knapsack(w. scanf("%d". .&n).&m).int . w2. .*w. Theory: Let i be the highest-numbered item in an optimal solution S for W pounds.1) items and the same weight. that is. . printf("Enter the numebr of item"). That is. or does not include ith item.h> #include<malloc. c[0 . . The algorithm takes as input the maximum weight W. Then 0 if i = 0 or w = 0 c[i. printf("\n"). c=(int **)malloc(sizeof(int*)*(n+1)).c[n][m]).i.w.c[i-1][j]).i<=n.int *p. for(i=0. } /*printf("\n cost matrix\n"). for(i=0. }*/ printf("\n Maximum profit from 0/1 Knasack=%d". printf("%d \t".int i.i-1.i<=n. Assistant Professor.j++) printf("%d \t".j++) c[0][j]=0. } else c[i][j]=c[i-1][j].i++) c[i]=(int *)malloc(sizeof(int)*(m+1)).c[i][j].int m) { int **c. } void select_item(int **c.w. int i.n.int j) { //printf("\n i=%d j=%d\n".j. else { if(c[i][j]!=c[i-1][j]) { select_item(c.int *w.j-w[i]).i++) { for(j=1. printf("selected Items are"). for(i=0. for(i=1.i-1. select_item(c.i<=n.i<=n.j<=m. ITM Universe Vadodara .i++) { for(j=0. else c[i][j]=c[i-1][j]. for(j=1. Dept.c[i][j]). CSE. if(i==0 || j==0) return .w.int n.j). } else { select_item(c.j++) if (w[i]<=j) { if(p[i]+c[i-1][j-w[i]]>c[i-1][j]) c[i][j]=p[i]+c[i-1][j-w[i]].j). } } } Output: Subject Teacher: Anuj Kumar Jain.m).} void knapsack(int *w.j<=m.i++) c[i][0]=0.i). //printf("\n ci=%d cj=%d\n".j<=m. CSE. ITM Universe Vadodara . Assistant Professor. Dept.Conclusion: Question & Answer: Subject Teacher: Anuj Kumar Jain. int i. if i = j 2. void main() { unsigned int *p. PRINT-OPTIMAL-PARENS(s. for(i=1.n).&n). printf("Enter the no of matrix:"). j ]← q 12. CSE. return m and s PRINT-OPTIMAL-PARENS(s.i++) p[i]=rand()%40. ITM Universe Vadodara . } Subject Teacher: Anuj Kumar Jain. do q ← m[i.i<=n. then print “A”i 3.i++) printf("%d\t".q.n. s[i]=(int *)malloc(n*sizeof(int)).h> void matrix_chain(int *. then m[i. print “)” Source Code: #include <stdio. if q < m[i.int ).l. Experiment No: 6 Aim: Implementation of chain matrix multiplication using dynamic programming. j ] + pi−1 pk pj 10. j ]←∞ 8. j ) 1. Algorithm: MATRIX-CHAIN-ORDER(p) 1.i++) { c[i]=(long *)malloc(n*sizeof(long)). do m[i. s[i. Assistant Professor. m[i. for l ← 2 to n // l is the chain length.h> #include <time.h> #include <limits. s[i. printf("\n Random Value of Matrix size").i<=n.**s. n ← length[p] − 1 2.int ). for(i=0. 5. s[i. scanf("%d". j ) 6.int n) { long **c. for i ← 1 to n 3. c=(long **)malloc(n*sizeof(long *)). do for i ← 1 to n − l + 1 6. for(i=0. j ] 11. matrix_chain(p.p[i]). j ] + 1. PRINT-OPTIMAL-PARENS(s.k. j ]) 5. k] + m[k + 1. else print “(” 4.h> #include <malloc. for k ←i to j − 1 9. s=(int **)malloc(n*sizeof(int *)).j. do j ←i + l − 1 7.h> #include <stdlib. i. void print_pattern(int **.i<=n. } void matrix_chain(int *p. p=(int*)malloc((n+1)*sizeof(int)). Dept.int . i ] ← 0 4. j ] ← k 13.i. i. k++) { q=c[i][k]+c[k+1][j]+p[i-1]*p[k]*p[j].s[i][j]). Assistant Professor.i<=n-l+1. s[i][j]=k.j<=n.n).i<=n. Dept.k<j. print_pattern(s.l). if(q<c[i][j]) { c[i][j]=q.i. } void print_pattern(int **s.c[i][j]).i). } printf("\n Order of matrix chain Multiplication"). printf("}"). } else { printf("{"). c[i][j]=INT_MAX.l<=n. for(l=2.int j) { if (i==j) { printf("A%d".j++) printf("%ld\t".1. printf("\n"). for(k=i. } } } } printf("\n cost Matrix for matrix chain").i<=n. for(i=1.int i. for(i=1. CSE.i++) c[i][i]=0. print_pattern(s.i++) { for(j=1. print_pattern(s. } } Output: Subject Teacher: Anuj Kumar Jain. ITM Universe Vadodara .l++) { printf("%d \t". for(i=1.i++) { j=i+l-1. printf("\n").s[i][j]+1.j). Assistant Professor. CSE. Dept.Subject Teacher: Anuj Kumar Jain. ITM Universe Vadodara . then both elemets to be comapred falls outside the table. using only coins of denominations 1 to i. do c[i. in which case . c[i. have value units. First we may choose not to use any coins of denomination i.n) 1.j]< ←1+c[i. We suppose as is usual taht each . do for j← 1 to N 5. for i← 1 to n 4. there remains to be paid an amount of j-di units. if we have to make change for 8 units.j-d[i]] 12. even though this is now permitted. using as few coins as possible. Lets a coin of denomination i.j] 11. else if i==1 8. to filll in the table.j] ←c[i-1. In this case we set to to indicate that it is impossible to pay an amount j using only coins to type 1.j] ←1+c[1. . we live where there are coins for 1.j] ←1+c[i. then c[i.d. the same is true when . note first that is zero for every value of i. else if j<d[i] 10. after this initialization.4 and 6 units. However it is clearly possible to do better that this we can give the customer his change using just two 4-unit coins.j-d[1]] 9.j-d[i]] 14. Suppose the currency we are using has available coins of different denominations. the solution of the instance is therfore given by if all we want to kanow is how many coins are needed. it is convenient to think of such elements as having the value of .j] ←c[i-1. in this table will be the minimum number of coins required to pay an amount of j units. to pay an amount j using coins of denominations 1 to i. we have in general two choices. ITM Universe Vadodara . Experiment No: 7 Aim: Implementation of making a change problem using dynamic programming Theory For example suppose. To solve this problem by dynamic programming. in general therfore when i=1 one of the element to be compared falls outside the table. Assistant Professor. or column by column from top to bottom. Although the greedy algorithm does not find this solution.j] 13. if i=1 and . the greedy algorithm will propose doing so using one 6-unit coin and two 1- unit coins. CSE. so c . return c[n. Algorithm: MAKING _CHANGE(N. For the time being we shall also supose taht we ahve an unlimited supply of coins of each denomination.j] ←∞ 7. we choose which ever alternative is better. Finally suppose we have to give the customer coins of each being we shall also suppose we have that we have to give the customer coins worth units.0] ←0 3. sicne we want to minimize of coins used. it is easily obtained using dynamic programming. to pay this take coins. else then c[i. we setup a table with onw raw for each availabe denomination and one column for amount form 0 units to N units. we may choose to use at least one coins of denomination. the table can be filled either row form left to right. Alternatively. else if c[i-1. then c[i. for a total of three coins. . then c[i.N] Source Code: // Coin change problem Subject Teacher: Anuj Kumar Jain. for i← 1 to n 2. if i=1 and j<d[i] 6. Dept. h> int n. else if (i==1) c[1][j]=c[1][j-d[1]]+1.#include <stdio.coin).1.h> #include <malloc.n.int d[]) { if (j<=0) return .ch).i<=n.int n.i.i++) c[i]=(int *)malloc((m+1)*sizeof(int *)). for(i=1.i<=n. ITM Universe Vadodara . scanf("%d".d). int .m.j++) printf("%d \t ".int).h> #include <limits. else if (c[i-1][j]<c[i][j-d[i]]+1) c[i][j]=c[i-1][j].ch.int *). void main() { int *d. void print_pattern(int **. print_pattern(c.d).m.h> #include <time. printf("\n"). else if (i==1 && j>=d[1]) { print_pattern(c. } printf("Selected Coins:").j<=m. for(i=1. else c[i][j]=c[i][j-d[i]]+1.int i.i++) c[i][0]=0. Subject Teacher: Anuj Kumar Jain. int coin_change(int *.int .j<=m. else if(j< d[i]) c[i][j]=c[i-1][j].j++) if(i==1 && j<d[i]) c[i][j]=INT_MAX.int j.c[i][j]).i<=m.j-d[1]. for(i=0.int m) { int **c. d=(int*)malloc((n+1)*sizeof(int)).n. Assistant Professor.&d[i]).i<=n. } void print_pattern(int **c. for(i=1. return c[n][m].h> #include <stdlib.i++) scanf("%d".i++) for(j=1. printf("Enter the number of type of coin:").&n). for(i=1.j. scanf("%d". c=(int **)malloc((n+1)*sizeof(int *)).int .i++) { for(j=0. printf("\n Enter the Change Value"). int coin. printf("No of Selected Coins are %d \n".i.i<=n.&ch). coin=coin_change(d. } int coin_change(int *d. CSE. Dept. CSE.d).i. ITM Universe Vadodara . Dept.j-d[i]. Assistant Professor. } Output: Conclusion: Now we can find minimum number of coin required to make the change of particular amount.d[i]). } else print_pattern(c.i-1.d).d[1]).j. printf("%d \n". Complexity of making change is . Subject Teacher: Anuj Kumar Jain. printf("%d \n". } else if(c[i][j]!=c[i-1][j] && j>=d[i]) { print_pattern(c. Dept.u. printf("enter the weight of the item").n.i++) { loc=i.h> #include <time. float *x.0.i<=n.n object I has a positive weight wi and the positive value vi. In this case object I contribute xiwi to the total weight in the knapsack and xivi to the value of the load. where 0<=xi<=1. // find the ratio of p[i]/w[i].j<=n. w=(int *)malloc(n*sizeof(int)).*r. Assistant Professor.profit=0. in this problem we assumed that the objects can be broken into smaller pieces. for(j=i+1. void main() { int *w.i<n. ITM Universe Vadodara .h> #include <string. r=(float *)malloc(n*sizeof(float)). while respecting the capacity constraint. printf("enter the profit of the item"). min=r[i]. min=r[j]..i++) scanf("%d".&n).i<=n. for(i=1.m. CSE. Experiment No: 8 Aim: Implementation of a knapsack problem using greedy algorithm Theory: We are given n objects and a knapsack.&p[i]).h> void swap(int *.float *).*p.loc. //sort the data according to ratio.int *). so we mau decide to carry onlty a fraction xi of object i.j++) if(min<r[j]) { loc=j.i<=n. scanf("%d".i.…. scanf("%d". Subject Teacher: Anuj Kumar Jain. the knapsack can carry a weight not exceeding W. for(i=1. for(i=1.h> #include <limits.min. void swap_float(float *. x=(float *)malloc(n*sizeof(float)).&w[i]).2. printf("\n Enter the no of item"). the problem can be stated as follows: Algorithm: Source Code: #include <stdio.i++) scanf("%d". p=(int *)malloc(n*sizeof(int)). our aim is to fill the knapsack in a way that maximizes the value of included objects. printf("Enter the Capacity of Knapsack").i++) r[i]=(float)p[i]/w[i].j. for i=1.&m).h> #include <stdlib. for(i=1. swap(&p[i]. } } u=m. temp=*x.i++) if(w[i]>u) break. } if(i<=n) x[i]=(float)u/w[i]. } Output: Subject Teacher: Anuj Kumar Jain. Dept. } void swap_float(float *x. for(i=1.&r[loc]). for(i=1. u=u-w[i]. for(i=1.i<=n.i++) profit=profit+x[i]*p[i]. *x=*y.int *y) { int temp. for(i=1. *y=temp. else { x[i]=1.x[i]).profit ). Assistant Professor. printf("\n Profit of Knapsack=%f". CSE. ITM Universe Vadodara .&p[loc]).i++) x[i]=0.i<=n.i<=n. swap(&w[i]. } void swap(int *x.i<=n.0. *y=temp.&w[loc]). } if(i!=loc) { swap_float(&r[i]. *x=*y.0.i++) printf("%f \t". temp=*x.float *y) { float temp. Thus it is always safe to make greedy choice. two other sets get accumulated among this one set contains the candidates that have been already considered and chosen while the other set contains the candidates that have been considered but rejected. Write any two characteristics of Greedy Algorithm? 1) To solve a problem in an optimal way construct the solution from given set of candidates. On each step. n is the number of objects and for each object i and are the weight and profit of object respectively. Develop a recursive algorithm and convert into iterative algorithm. Develop a recursive solution.Conclusion: We Have successfully implement the knapsack Problem Question & Answer: Q 1. Assistant Professor. What are the constraints of knapsack problem? To maximize: The constraint is : Where m is the bag capacity.it cant be changed) Q 4. ITM Universe Vadodara . Prove that at any stage of recursion one of the optimal choices is greedy choice. What are the steps required to develop a greedy algorithm? Determine the optimal substructure of the problem. 2) As the algorithm proceeds. CSE. Show that all but one of the sub problems induced by having made the greedy choice are empty. Q 3.until a complete solution is reached.the choice must be • Feasible(satisfy problem constraints) • Locally optimal(best local choice among all feasible choices available on that step) • Irrevocable(once made. What is the Greedy approach? The method suggests constructing solution through sequence of steps. Q 2. Subject Teacher: Anuj Kumar Jain. Dept.each expanding partially constructed solution obtained so far. It maintains several additional data structures with each vertex in the graph. the path in the breadth-first tree from s to v corresponds to a “shortest path” from s to v in G. at which time it becomes nonwhite. To keep track of progress. Breadth-first search constructs a breadth-first tree. When all of v’s edges have been explored. Unlike breadth-first search. In depth-first search. then π[u] = NIL. the vertex v and the edge (u. breadth-first search systematically explores the edges of G to “discover” every vertex that is reachable from s.3 Depth-first search The strategy followed by depth-first search is. Experiment No: 9 Aim: Implementation of Graph and Searching (DFS and BFS). Gray vertices may have some adjacent white vertices. A vertex is discovered the first time it is encountered during the search. The distance from the source s to vertex u computed by the algorithm is stored in d[u]. the algorithm discovers all vertices at distance k from s before discovering any vertices at distance k + 1. E) is represented using adjacency lists. Breadth-first search is so named because it expands the frontier between discovered and undiscovered vertices uniformly across the breadth of the frontier. but breadth-first search distinguishes between them to ensure that the search proceeds in a breadth-first manner. Whenever a white vertex v is discovered in the course of scanning the adjacency list of an already discovered vertex u. Assistant Professor. Gray and black vertices. gray. This process continues until we have discovered all the vertices that are reachable from the original source vertex. which is the source vertex s. initially containing only its root. because the search may be repeated from multiple sources. whose predecessor subgraph forms a tree. breadth-first search colors each vertex white. it has at most one parent. v) ∈ E and vertex u is black. that is. if u = s or u has not been discovered). All vertices start out white and may later become gray and then black. as its name implies. and the predecessor of u is stored in the variable π[u]. Since a vertex is discovered at most once. where Subject Teacher: Anuj Kumar Jain. That is. It computes the distance (smallest number of edges) from s to each reachable vertex. Given a graph G = (V. then u is an ancestor of v and v is a descendant of u. If any undiscovered vertices remain.2 The predecessor subgraph of a depth-first search is therefore defined slightly differently from that of a breadth-first search: we let Gπ = (V. or black. depth-first search records this event by setting v’s predecessor field π[v] to u. 22. E) and a distinguished source vertex s. that is. For any vertex v reachable from s. It also produces a “breadth-first tree” with root s that contains all reachable vertices. The breadth-first-search procedure BFS below assumes that the input graph G = (V. This entire process is repeated until all vertices are discovered. The algorithm works on both directed and undirected graphs.2) and Dijkstra’s single-source shortest-paths algorithm (Section 24. have been discovered. If u has no predecessor (for example. all vertices adjacent to black vertices have been discovered. therefore. the search “backtracks” to explore edges leaving the vertex from which v was discovered. a path containing the smallest number of edges.3) use ideas similar to those in breadth-first search. If (u. Theory: Breadth-first search Breadth-first search is one of the simplest algorithms for searching a graph and the archetype for many important graph algorithms. whenever a vertex v is discovered during a scan of the adjacency list of an already discovered vertex u. then one of them is selected as a new source and the search is repeated from that source. The color of each vertex u ∈ V is stored in the variable color[u]. Ancestor and descendant relationships in the breadth-first tree are defined relative to the root s as usual: if u is on a path in the tree from the root s to vertex v. to search “deeper” in the graph whenever possible. CSE. the predecessor subgraph produced by a depth-first search may be composed of several trees. Eπ ). ITM Universe Vadodara . then vertex v is either gray or black. v) are added to the tree. We say that u is the predecessor or parent of v in the breadth-first tree. Prim’s minimum-spanningtree algorithm (Section 23. they represent the frontier between discovered and undiscovered vertices. As in breadth-first search. edges are explored out of the most recently discovered vertex v that still has unexplored edges leaving it. Dept. π[u]← NIL 5. This technique guarantees that each vertex ends up in exactly one depth-first tree. ENQUEUE(Q. These timestamps are integers between 1 and 2 |V|. Besides creating a depth-first forest. do if color[v] = WHITE 6. s) 1. (22. do if color[u] = WHITE 7. d[s] ← 0 7. and is blackened when it is finished. d[v] ← d[u] + 1 16. d[u] ← time 4. is grayed when it is discovered in the search. depth-first search also timestamps each vertex. π[u]← NIL 4. color[s] ← GRAY 6. CSE. π[v]← u 17. that is. Dept. do u ← DEQUEUE(Q) 12. since there is one discovery event and one finishing event for each of the |V| vertices. for each ∈ 13. d[u]←∞ 4. v). These timestamps are used in many graph algorithms and are generally helpful in reasoning about the behavior of depth-first search. The procedure DFS below records when it discovers vertex u in the variable d[u] and when it finishes vertex u in the variable f [u]. time ← 0 5. The variable time is a global variable that we use for timestamping. for each vertex ∈ 2. Algorithm: BFS(G. do if color[v] = WHITE 14. when its adjacency list has been examined completely. for each ∈ //Explore edge (u. For every vertex u. and BLACK thereafter. Assistant Professor. color[u] ← BLACK DFS(G) 1. ENQUEUE(Q. π[s] ← NIL 8. while 11. The following pseudocode is the basic depth-first-search algorithm. As in breadth-first search. do color[u] ← WHITE 3. for each vertex ∈ 6. color[u] ← GRAY // White vertex u has just been discovered. The input graph G may be undirected or directed. do color[u] ← WHITE 3. v) : v ∈ V and π[v] _= NIL} . d[u] < f [u] . Each vertex v has two timestamps: the first timestamp d[v] records when v is first discovered (and grayed). so that these trees are disjoint. and the second timestamp f [v] records when the search finishes examining v’s adjacency list (and blackens v). ITM Universe Vadodara . The predecessor subgraph of a depth-first search forms a depth-first forest composed of several depth- first trees. time ← time+1 3. The edges in Eπ are called tree edges.2) Vertex u is WHITE before time d[u]. GRAY between time d[u] and time f [u]. then DFS-VISIT(u) DFS-VISIT(u) 1. then π[v] ← u Subject Teacher: Anuj Kumar Jain. s) 10.Eπ = {(π[v]. 5. then color[v] ← GRAY 15. Each vertex is initially white. vertices are colored during the search to indicate their state. for each vertex ∈ 2. 9. 2. v) 18. choice. printf("4.i++) visited[i]=FALSE. it is finished.i<=n. printf("Enter the choice"). int n. void dfs(int v). dfs_rec(v). 7. DFS using stack\n").&v). Adjacency Matrix\n").i++) visited[i]=FALSE.i<=n. DFS-VISIT(v) 8. 9. DFS using Recurssion\n"). scanf("%d". for(i=1. display(). case 5: printf("\n Enter starting node for breath first search"). Dept. scanf("%d". scanf("%d".&v). void bfs(int v). printf("3. while(1) { printf("1.h> #include <stdlib. create_graph().i<=n. ITM Universe Vadodara .i++) Subject Teacher: Anuj Kumar Jain. for(i=1. f [u] ← time ← time+1 Source Code: include <stdio. int main() { int i. CSE. void dfs_rec(int v). scanf("%d". void display(). Return\n"). for(i=1. Assistant Professor. case 2: printf("\n Enter starting node for depth first search"). break.h> #define Max 20 #define FALSE 0 #define TRUE 1 int adj[Max][Max]. case 3: printf("\n Enter starting node for depth first search"). switch(choice) { case 1: printf("\n Print the Adjacency Matrix\n"). break. void create_graph().&v).&choice). printf("2. color[u] ← BLACK // Blacken u. int visited[Max]. dfs(v). break.v. for(i=1. printf("Enter number of Vertices").&n). } Subject Teacher: Anuj Kumar Jain. } else { adj[origin][destination]=1.i<=n. break. ITM Universe Vadodara . Dept.i).destination. break.j++) { printf("%d". } printf("\n").&origin. max_edge=n*(n-1)/2. i--.max_edge. visited[i]=FALSE. if(origin==0 && destination==0) break. } void create_graph() { int i.adj[i][j]). scanf("%d %d".j. if(origin>n || destination>n|| origin<=0||destination<=0) { printf("Invalid edge \n"). Assistant Professor. } } return 0. for(i=1. CSE. case 4: return .origin.i<=max_edge.i++) { printf("Enter Edge %d(0 0 to quit)".&destination). } void display() { int i. default :printf("\n ERROR! Wrong Chice"). scanf("%d". bfs(v).j<=n.i++) { for(j=1. } } printf("Hello graph Created"). for(i=1.front.stack[Max]. top--. for(i=n. visited[v]=TRUE. printf("%d". top++.j. visited[pop]=TRUE.pop). while(top>0) { pop=stack[top].i>=1. CSE. Subject Teacher: Anuj Kumar Jain. stack[top]=v.i--) { if((adj[pop][i]==1)&& (visited[i]==FALSE)) { top++.pop. rear++. front=rear=-1.t. if(visited[pop]==FALSE) { printf("%d".rear. stack[top]=i. front++. printf("%d". visited[v]=TRUE. while(front<=rear) { v=que[front].i<=n. ITM Universe Vadodara . } void dfs(int v) { int i. Dept.i++) if((adj[v][i]==1) && (visited[i]==FALSE)) dfs_rec(i). } } } } void bfs(int v) { int i.v). Assistant Professor. que[rear]=v.top=0. } void dfs_rec(int v) { int i.v). } else continue. int que[20]. i++) { if((adj[v][i]==1)&&visited[i]==FALSE) { printf("%d". } } } } Output: Conclusion: We have successfully implemented the BFS and DFS Subject Teacher: Anuj Kumar Jain.front++. Assistant Professor. Dept. ITM Universe Vadodara . visited[i]=TRUE. que[rear]=i. rear++.i<=n. CSE. for(i=1.i). Pick the vertex with minimum key value and not already included in MST (not in mstSET). The vertices included in MST are shown in green color. pick the minimum weight edge from the cut and include this vertex to MST Set (the set that contains already included vertices). the key value for these vertices indicate the minimum weight edges connecting them to the set of vertices included in MST. INF. INF} where INF indicates infinite. Dept. A group of edges that connects two set of vertices in a graph is called cut in graph theory. After picking the edge. The key value of vertex 2 becomes 8. INF. let vertex 7 is picked. How does Prim’s Algorithm Work? The idea behind Prim’s algorithm is simple. So mstSet becomes {0}. The first set contains the vertices already included in the MST. ITM Universe Vadodara . So. it considers all the edges that connect the two sets. Now pick the vertex with minimum key value. CSE. 7}. Experiment No: 10 Aim: Implement prim’s algorithm Theory: Prim’s algorithm is a Greedy algorithm. Assistant Professor. 1}. And they must be connected with the minimum weight edge to make it a Minimum Spanning Tree. We can either pick vertex 7 or vertex 2. Subject Teacher: Anuj Kumar Jain. at every step of Prim’s algorithm. update key values of adjacent vertices. Update the key values of adjacent vertices of 1. So the two disjoint subsets of vertices must be connected to make a Spanning Tree. INF. we find a cut (of two sets. At every step. The vertex 0 is picked. Update the key values of adjacent vertices of 7. The vertex 1 is picked and added to mstSet. It starts with an empty spanning tree. the other set contains the vertices not yet included. only the vertices with finite key values are shown. So mstSet now becomes {0. a spanning tree means all vertices must be connected. and picks the minimum weight edge from these edges. The idea is to maintain two sets of vertices. After including to mstSet. 1. include it in mstSet. one contains the vertices already included in MST and other contains rest of the vertices). INF. it moves the other endpoint of the edge to the set containing MST. The idea of using key values is to pick the minimum weight edge from cut. Following subgraph shows vertices and their key values. The key values of 1 and 7 are updated as 4 and 8. Adjacent vertices of 0 are 1 and 7. The key values are used only for vertices which are not yet included in MST. INF. Let us understand with the following example: The set mstSet is initially empty and keys assigned to vertices are {0. So mstSet now becomes {0. Pick the vertex with minimum key value and not already included in MST (not in mstSET). INF. The key value of vertex 6 and 8 becomes finite (7 and 1 respectively). v) Source Code: #include <stdio. Algorithm: MST-PRIM(G.h> #include <stdlib.w. do key[u]←∞ 3. 7. So mstSet now becomes {0. 10 then π[v]← u 11. key[r] ← 0 5. Update the key values of adjacent vertices of 6. Q ← V[G] 6. We repeat the above steps until mstSet includes all vertices of given graph. Dept. we get the following graph. The key value of vertex 5 and 8 are updated. for each ∈ 2. 7 do u ← EXTRACT-MIN(Q) 8. Finally. v) < key[v] 10.h> #define Max 20 #define TEMP 0 #define PERM 1 #define FALSE 0 #define TRUE 1 #define infinity 99999 struct node1 { Subject Teacher: Anuj Kumar Jain. ITM Universe Vadodara . π[u]← NIL 4. 11 key[v] ← w(u. 9 do i ∈ and w(u. 1. CSE. Vertex 6 is picked. r) 1.Pick the vertex with minimum key value and not already included in MST (not in mstSET). 6}. Assistant Professor. 8 for each ∈ 9. while 7. &destination). int predecessor. printf("Edge to be included in spanning tree are \n"). void create_graph().count.destination. printf("Adjacency Matrix"). struct edge1 { int u.&wt_tree).v). int adj[Max][Max].u). int path[Max]. ITM Universe Vadodara . scanf("%d". scanf("%d %d".origin.&n).max_edge.i<=max_edge.int *weight). int dist.wt_tree). int maketree(struct edge1 tree[Max]. int wt_tree=0.i++) { printf("Enter Edge %d(0 0 to quit)". } void create_graph() { int i. Assistant Professor.i++) { printf("%d->".&origin. } printf("weight of the minimum spanning tree is %d\n".j. printf("%d\n". }tree[Max].i). int all_perm(struct node1 state[Max]). CSE. create_graph(). for(i=1. void main() { int i. //struct egde1 Tree[20]. int status.i<=count. count=maketree(tree.tree[i]. }. void display(). int v. if(origin==0 && destination==0) Subject Teacher: Anuj Kumar Jain. Dept. max_edge=n*(n-1)/2. int n. for(i=1. printf("Enter number of Vertices"). display().wt.tree[i]. exit(1).int *weight) { struct node1 state[Max]. } } void display() { int i. } else { adj[origin][destination]=wt.current. i--. scanf("%d". for(i=1.v1.j<=n.i++) { state[i].k. } printf("\n"). } state[1]. ITM Universe Vadodara . Subject Teacher: Anuj Kumar Jain.newdist.j. int i.status=TEMP.&wt).count.i<=n. state[i].j++) { printf("%d". int u1.min. break. CSE. adj[destination][origin]=wt. printf("Enetr the weight for this edge"). int m.dist=0. *weight=0.i<=n. state[1].predecessor=0.i++) { for(j=1.adj[i][j]). for(i=1.dist=infinity. } } int maketree(struct edge1 tree[Max]. Dept. Assistant Professor. state[i]. if(origin>n || destination>n|| origin<=0||destination<=0) { printf("Invalid edge \n").predecessor=0. } } if(i<n-1) { printf("Spanning tree is not possible"). while(all_perm(state)!=TRUE) { for(i=1. } } } min=infinity. count++. state[1].status==TEMP) return FALSE.dist<min) { min=state[i]. } int all_perm(struct node1 state[Max]) { int i. u1=state[current]. *weight=*weight+adj[u1][v1].u=u1.predecessor. v1=current. current=1. tree[count].i<=n.status==TEMP && state[i].i++) if(state[i]. CSE. tree[count].status=TEMP. state[i]. current=i. } Output: Subject Teacher: Anuj Kumar Jain.status=PERM.dist) { state[i].dist.status==TEMP) { if(adj[current][i]<state[i].i<=n. for(i=1. Assistant Professor. ITM Universe Vadodara . count=0.dist=adj[current][i].v=v1. for(i=1.i++) { if(state[i]. } } } state[current].i++) { if(adj[current][i]>0 && state[i].predecessor=current. return TRUE. Dept.i<=n. It works by attaching to a previously constructed subtree a vertex to the vertices already in the tree. What are the applications of Minimum Spanning Tree? max bottleneck paths LDPC codes for error correction image registration with Renyi entropy learning salient features for real-time face verification reducing data storage in sequencing amino acids in a protein model locality of particle interactions in turbulent fluid flows autoconfig protocol for Ethernet bridging to avoid cycles in a network Q 4. Define Prim's Algorithm Prim's algorithm is a greedy algorithm for constructing a minimum spanning tree of a weighted connected graph. connected and undirected graph is a spanning tree with weight less than or equal to the weight of every other spanning tree. The weight of a spanning tree is the sum of weights given to each edge of the spanning tree.Conclusion: We have successfully implemented the prim’s Algorithm Question & Answer: Q 1. Dept. A minimum spanning tree (MST) or minimum weight spanning tree for a weighted. CSE. Assistant Professor. Q 2. Q 3. A single graph can have many different spanning trees. Subject Teacher: Anuj Kumar Jain. How many edges does a minimum spanning tree has? A minimum spanning tree has (V – 1) edges where V is the number of vertices in the given graph. What is Minimum Spanning Tree? Given a connected and undirected graph. ITM Universe Vadodara . a spanning tree of that graph is a subgraph that is a tree and connects all the vertices together. c) Repeat step#2 until there are (V-1) edges in the spanning tree. Check if it forms a cycle with the spanning tree formed so far. Pick edge 7-6: No cycle is formed. The algorithm is a Greedy Algorithm. The Greedy Choice is to pick the smallest weight edge that does not cause a cycle in the MST constructed so far. Subject Teacher: Anuj Kumar Jain. After sorting: Weight Src Dest Weight Src Dest 1 7 6 7 7 8 2 8 2 8 0 7 3 6 5 8 1 2 4 0 1 9 3 4 4 2 5 10 5 4 6 8 6 11 1 7 7 2 3 14 3 5 Now pick all edges one by one from sorted list of edges 1. include this edge. 2. Let us understand it with an example: Consider the below input graph. Dept. ITM Universe Vadodara . CSE. The graph contains 9 vertices and 14 edges. So. Experiment No: 11 Aim: Implement kruskal’s algorithm Theory: Below are the steps for finding MST using Kruskal’s algorithm a) Sort all the edges in non-decreasing order of their weight. include it. include it. If cycle is not formed. the minimum spanning tree formed will be having (9 – 1) = 8 edges. Pick edge 8-2: No cycle is formed. discard it. Pick edge 6-5: No cycle is formed. 3. Else. include it. b) Pick the smallest edge. Assistant Professor. discard it. 8. 10. Since the number of edges included equals (V – 1). Pick edge 2-5: No cycle is formed. ITM Universe Vadodara . include it. Assistant Professor. include it 7. CSE. include it. Pick edge 1-2: Since including this edge results in cycle. Pick edge 3-4: No cycle is formed. Pick edge 7-8: Since including this edge results in cycle. Pick edge 0-1: No cycle is formed. a. discard it. Dept. discard it. Pick edge 8-6: Since including this edge results in cycle. 5. 12. Pick edge 0-7: No cycle is formed. include it.4. include it. Pick edge 2-3: No cycle is formed. the algorithm stops here. Algorithm: Subject Teacher: Anuj Kumar Jain. 9. 11. i++) { printf("%d->"tree[i]. int n.h> #define Max 20 void create_garph(). kruskal() { int i. create_graph(). CSE. printf("Edge to be included in spanning tree are \n"). make_tree(). int count=0.wt. return A Source Code: #include<stdio.&n). } void create_graph() { int i. int w. do MAKE-SET(v) 4. printf("%d\n"tree[i]. max_edge=n*(n-1)/2. scanf("%d". for each vertex v ∈ V[G] 3. struct edge *link. void insert_pque(int i.i).w) 1.i++) { printf("Enter edge %d (0 0 to quit)".MST-KRUSKAL(G. sort the edges of E into nondecreasing order by weight w 5. then A ← A ∪ {(u. Assistant Professor. printf("Enter the number of node"). Dept. int father[Max].v).&destination). } printf("weight of the minimum spanning tree is %d\n".max_edge.destination. void make_tree(). int v. v) ∈ E. A←∅ 2. ITM Universe Vadodara .i<=max_edge.int j.h> #include<stdlib.int wt). do if FIND-SET(u) != FIND-SET(v) 7. struct edge{ int u. int wt_tree=0. for(i=1. for each edge (u.u). } *front1=NULL. scanf("%d %d". struct edge *del_pque(). Subject Teacher: Anuj Kumar Jain.int j. void insert_tree(int i.origin. taken in nondecreasing order by weight 6. for(i=1.&origin.int wt). UNION(u. v)} 8. v) 9. struct edge tree[max].i<=count.wt_tree). } } } void insert_tree(int i.tmp->weight). exit(1).wt). tree[count]. node1=father[node1]. Assistant Professor. } else insert_pque(origin.w.root_n1. while(count<n-1) { tmp=del_pque() node1=tmp->u. ITM Universe Vadodara .root_n2.n2=%d".int j.node2). tree[count]. if(origin==0&&destination==0) break.w=wt. while(node1>0) { root_n1=node1.int wt) { printf("this edge inserted in the spanning tree\n"). father[root_n2]=root_n1.node2. tree[count]. if(origin>n || destination>n|| origin<=0||destination<=0) { printf("Invalid edge \n"). printf("enter the weight for this edge").&wt).destination.tmp->v. } if(root_n2!=root_n1) { insert_tree(tmp->u. node2=tmp->v. i--.node1. wt_tree=wt_tree+tmp. } while(node2>0) { root_n2=node2. } if(i<n-1) { printf("Spanning tree is not possible"). node2=father[node2]. } Subject Teacher: Anuj Kumar Jain. int node1. printf("\n n1=%d.u=i. count++. Dept. scanf("%d". CSE. } } void make_tree() { struct edge *tmp.v=j. temp->link=q->link. if(q->link==NULL) tmp->link=NULL. Define Kruskal's Algorithm Subject Teacher: Anuj Kumar Jain. return tmp. } Output: Conclusion: We have successfully implemented the kruskal Algorithm Question & Answer: Q 1.void insert_pque(int i. } } struct edge *del_pque() { struct edge *tmp. Assistant Professor.where the weight of a tree is defined as the sum of the weights on all its edges Q 3.tmp->u. Q 2. tmp=front1. tmp->u=i. } else { q=fornt1..tmp->v. tmp->w=wt.tmp->w). a tree) that contains all the vertices of the graph.int wt) { struct edge *tmp. CSE.e. tmp->v=j. Define Minimum Spanning Tree A minimum spanning tree of a weighted connected graph is its spanning tree of the smallest weight .*q. tmp=(struct edge *)malloc (sizeof(stuct edge)). while (q->link!=NULL && q->link->w<=tmp->w) q=q->link. ITM Universe Vadodara . front1=tmp. if(fornt1==NULL || tmp->w<fornt1->w) { tmp->link=fornt1. Define Spanning Tree A Spanning Tree of a connected graph is its connected acyclic subgraph(i. q->link=tmp. Dept. front1=form1->link. printf("Edge processed is %d->%d%d\n ".int j. 1 edges for which the sum of the edge weights is the smallest. Assistant Professor.E) as an acyclic subgraph with | V| . CSE. Subject Teacher: Anuj Kumar Jain. ITM Universe Vadodara . Kruskal's algorithm looks at a minimum spanning tree for a weighted connected graph G =(V. Dept. When biologists find a new sequences. Since the pattern and text have symmetric roles. Experiment No: 12 Aim: Implement LCS problem. the order of the letters in the subsequence). do c[0. else if c[i − 1. it just gives us a way of making the solution more efficient once we have. Dept. To do this. j ] ← c[i − 1.instead it gives the longest prefix of A that's a subsequence of B. these programs want to send the terminal as few characters as possible to cause it to update its display correctly. The dynamic programming idea doesn't tell us how to find this. Let's start with some simple observations about the LCS problem. then c[i. But the longest common subsequence of A and B is not always a prefix of A. 0] ← 0 5. File comparison. for j ← 0 to n 6. Molecular biology. j − 1] + 1 11. do for j ← 1 to n 9. Y ) 1. For slow dial-in terminals. j ] ← 0 7. Many text editors like "emacs" display part of a file on the screen. then c[i. we first need a recursive solution. j − 1] 13. It works by finding a longest common subsequence of the lines of the two files. DNA sequences (genes) can be represented as sequences of four letters ACGT. n ← length[Y ] 3. no two lines cross (the top and bottom endpoints occur in the same order. we can represent a subsequence as a way of writing the two so that If we draw lines connecting the letters in the first string to the corresponding letters in the second. One way of computing how similar two sequences are is to find the length of their longest common subsequence. Assistant Professor. so what it displays is the remaining set of lines that have changed. any line in the subsequence has not been changed. We'll use m to denote the length of A and n to denote the length of B. do c[i. ITM Universe Vadodara . Theory: What if the pattern does not occur in the text? It still makes sense to find the longest subsequence that occurs both in the pattern and in the text. they typically want to know what other sequences it is most similar to. j ] ← c[i − 1. If we have two strings. corresponding to the four submolecules forming DNA. Conversely any set of lines drawn like this. So we want to solve the longest common subsequence problem by dynamic programming. say "nematode knowledge" and "empty bottle". represents a subsequence. m ← length[X] 2. Why might we want to solve the longest common subsequence problem? There are several motivating applications. j ] ≥ c[i. else c[i. The Unix program "diff" is used to compare two different versions of the same file. from now on we won't give them different names but just call them strings A and B. This is the longest common subsequence problem. updating the screen image as the file is changed. It is possible to view the computation of the minimum length sequence of characters needed to update the terminal as being a sort of common subsequence problem (the common subsequence tells you the parts of the display that are already correct and don't need to be changed). j ] ← c[i. for i ← 1 to m 8. Algorithm: LCS-LENGTH(X. for i ← 1 to m 4. b[i. b[i. to determine what changes have been made to the file. j ] ← “&” 12. j − 1] Subject Teacher: Anuj Kumar Jain. In this instance of the problem we should think of each line of a file as being a single complicated character in a string. CSE. do if xi = yj 10. j ] 14. j ] ← “↑” 15. Note that the automata-theoretic method above doesn't solve the problem -. without crossings. Screen redisplay. } void main() { char *x. x=randstring(n). Dept. lcs(x. j ] = “&” 4. j − 1) 5. j − 1) Source Code: #include <stdio. i.int m) { int **c. j ) 1. j ) 8. Assistant Professor.int n. if (length) { randomString = malloc(sizeof(char) * (length +2)). int i.j.n. int n. X. i.**s. X. puts(y).h> void lcs(char *. then PRINT-LCS(b. i − 1. 16. X. } randomString[length] = '\0'. if b[i. b[i.char *. printf("Enter the length of Second string "). printf("Enter the length of first string "). int i. Subject Teacher: Anuj Kumar Jain. then return 3.int . puts(x). if i = 0 or j = 0 2.&m).h> #include <limits.n++) { int key = rand() % (int)(sizeof(charset) -1).j. y=randstring(m). j ] ← “←” 17. i − 1. CSE.m. then PRINT-LCS(b. else PRINT-LCS(b. } void lcs(char *x. } } return randomString. return c and b PRINT-LCS(b. j ] = “↑” 7. ITM Universe Vadodara . int ).h> #include <stdlib.m). void print_lcs(int **. if (randomString) { for (n=1.char *y. *y. randomString[n] = charset[key].y. X. c=(int **)malloc(sizeof(int *)*(n+1)). scanf("%d". s=(int **)malloc(sizeof(int *)*(n+1)).char *.n<=length. char *randomString = NULL.h> #include <time.&n). else if b[i.int . print xi 6.int ). char *randstring(int length) { static char charset[] = "GTAC".n. scanf("%d". j<=m. } else if(c[i-1][j]>=c[i][j-1]) { c[i][j]=c[i-1][j].j<=m. ITM Universe Vadodara .i<=n.x[i]). } void print_lcs(int **s.i<=n. for(j=1.j++) c[0][j]=0.i++) { for(j=1.c[n][m]).x.i++) for(j=1.j-1).i<=n.n.char *x.i<=n.j<=m.j<=m.x. printf("hello").i<=n. if(s[i][j]==1) { print_lcs(s. printf("\n"). for(i=0. CSE. } printf("\n Structure Matrix"). s[i][j]=2. for(i=0. } else if(s[i][j]==2) print_lcs(s.m). } for(i=0. s[i][j]=1. printf("\n Comman String are : ").i-1. } else { c[i][j]=c[i][j-1]+1. printf("%c". printf("\n"). } printf("\n length of LCS=%d".j).j++) printf("%d \t".int i.i++) { for(j=0. int j) { if (i==0 || j==0) return.j-1). s[i]=(int *)malloc(sizeof(int)*(m+1)).x.c[i][j]). for(i=1.i-1.s[i][j]). else print_lcs(s. for(i=1.i. Subject Teacher: Anuj Kumar Jain.x. print_lcs(s. Assistant Professor.i++) c[i][0]=0.j++) { if(x[i]==y[j]) { c[i][j]=c[i-1][j-1]+1. Dept.j++) printf("%d \t". } } printf("Cost Matrix").i++) { c[i]=(int *)malloc(sizeof(int)*(m+1)). s[i][j]=3. } Output: Subject Teacher: Anuj Kumar Jain. CSE. Assistant Professor. Dept. ITM Universe Vadodara . Dept. ITM Universe Vadodara . dynamic programming suggests solving each of the smaller sub problems only once and recording the results in a table from which we can then obtain a solution to the original problem. these subproblems arise from a recurrence relating a solution to a given problem with solutions to its smaller subproblems of the same type. Subject Teacher: Anuj Kumar Jain. Rather than solving overlapping subproblems again and again.Conclusion: We have successfully implemented the longest common subsequence. CSE. 1)Define Dynamic Programming Dynamic programming is a technique for solving problems with overlapping problems. Assistant Professor. Typically. Question & Answer: Q 1.