Thursday, January 14, 2016

Time Complexity of Algorithms:


Quick Sort:

(a) Best Case , Average Case:
    O(n log n)

(b) Worst Case:
    O(n^2)

(c) Recurrence Relation (worst case, when the pivot is always the smallest or largest element):
     T(n) = T(0) + T(n-1) + ϴ(n)
        which is equivalent to
     T(n) = T(n-1) + ϴ(n)
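Unrolling that worst-case recurrence shows where the O(n^2) bound comes from:

```latex
T(n) = T(n-1) + \Theta(n)
     = T(n-2) + \Theta(n-1) + \Theta(n)
     = \dots
     = \Theta\big(n + (n-1) + \dots + 1\big)
     = \Theta\big(n(n+1)/2\big)
     = \Theta(n^2)
```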


======================================================


Merge Sort:


(a) Best Case , Average Case & Worst Case:
    O(n log n)


(b) Recurrence Relation:
    T(n) = 2T(n/2) + ϴ(n)

Applications of Merge Sort

1) Merge Sort is useful for sorting linked lists in O(n Log n) time. Other O(n Log n) algorithms such as Heap Sort and Quick Sort (average case O(n Log n)) depend on random access, so they cannot be applied efficiently to linked lists.
2) Inversion Count Problem
3) Used in External Sorting


Worst Case Comparisons: (merging 2 sorted lists of lengths ‘m’ and ’n’ takes)
     m + n - 1




======================================================


Heap Sort:


(a) Best Case , Average Case & Worst Case:
    - Time Complexity : O(n log n)



Heapify Operation :
    - Time Complexity : O( Log n )


 Height of the heap (a complete binary tree on ’n’ nodes) :
    - Time Complexity (Worst Case) : O( Log n )    [ exactly ⌊log₂ n⌋ ]


======================================================


Graph Representation :


(a) Adjacency Matrix:

    - Searching an edge      : O(1)
    - Removing an edge         : O(1)
    - Adding a vertex         : O(V^2)
    - Space Complexity         : O(V^2)   


(b) Adjacency List:

    - Searching an edge              : O(V)
    - Adding a vertex                 : O(1)
    - Space Complexity     (Best Case)    : O(V+E)   
    - Space Complexity     (Worst Case)    : O(V^2)   


=======================================================


Graph Traversal :


(a) BFS :
    - Queue is used for traversal.
    - Space Complexity     : O(V)
    - Time Complexity     : O(|V| + |E|)
   

Applications of BFS:

    - Finding shortest paths & minimum spanning tree for unweighted graph
    - Peer to peer networks
    - Crawlers in Search Engines
    - Social Network WebSites
    - Broadcasting in Networks
    - Detecting cycle in undirected graphs
    - Testing bipartite graphs



(b) DFS :
    - Stack is used for traversal.
    - Space Complexity     : O(V)
    - Time Complexity     : O(|V| + |E|)


Applications of DFS:

    - Finding a minimum spanning tree (of an unweighted graph)
    - Finding an all-pairs shortest path tree (of an unweighted graph)
    - Detecting cycle in a graph
    - Testing bipartite graphs
    - For Topological Sorting
    - Finding strongly connected components
    - Finding path between 2 given vertices
    - Solving maze problems.


=======================================================


Finding Minimum Cost Spanning Tree:


Prim’s Algorithm:

(a) If graph is represented using Adjacency Matrix:

    - Time Complexity : O(V^2)              [in all cases]


(b) If graph is represented using Adjacency List & with help of binary heap (Min heap):

    - Time Complexity : O(E Log V)      [Worst Case: since E can be up to V^2, this is O(V^2 Log V)]




Kruskal’s Algorithm:

    - Time Complexity : O(E Log V)


=======================================================


Q) The minimum number of comparisons required to find the minimum and the maximum of 100 numbers is _________________.

Answer: 148
The minimum number of comparisons required is ⌈3n/2⌉ – 2 for n numbers: compare the elements in pairs (n/2 comparisons), then find the min among the smaller elements and the max among the larger ones (n/2 – 1 comparisons each). For n = 100: 50 + 49 + 49 = 148.


=======================================================


Greedy Knapsack Problem:

    - Time Complexity : O(n Log n)

=======================================================



Creating Min Heap:

    - Time Complexity : O(n)

Extracting Min/Max element from Heap:
    - Time Complexity : O(Log n)

Inserting/Deleting Min/Max element to Heap:
    - Time Complexity : O(Log n)




Huffman’s Coding:

    - Time Complexity : O(n Log n)


=======================================================



Dijkstra’s Algorithm:

    - Time Complexity = O(E Log V)




Bellman-Ford algorithm:

Time complexity of the Bellman-Ford algorithm is Θ(VE), where V is the number of vertices and E the number of edges.
If the graph is complete, E becomes Θ(V^2), so the overall time complexity becomes Θ(V^3).


======================================================


Topological Sorting of a DAG: (Similar to DFS)

    - Time Complexity = O(V + E)


======================================================


Fibonacci Series (Naive Recursion) :

    - Time Complexity = O(2 ^ n)



Fibonacci Series  (Using Dynamic Programming) :

    - Time Complexity = O(n)



======================================================


Matrix Chain Multiplication:

    No. of ways of splitting or parenthesising while multiplying ’n’ matrices :

             =  (2n)! / ( (n+1)! x n! )  , where  ’n’ = No. of matrices - 1
   


    Example : ABCD  (4 matrices, so n = 3)

    =>    first split: (A|BCD) , (AB|CD), (ABC|D)
    =>    full parenthesisations: (A|(B|CD)) , (A|(BC|D)), (AB|CD), ((A|BC)|D), ((AB|C)|D)
    =     5 ways of splitting


    Check with the formula:
            (2 x 3)! / ( (3 + 1)! x 3! ) = 720 / (24 x 6)
            = 5 ways of splitting



No. of Sub Problems to solve while multiplying a chain of matrices A[1..n] :
   
     = n ( n + 1 ) / 2
    => O (n^2)


No. of Splits:
    = n-1


Time Complexity :
    = O(n x n^2)
    = O(n ^ 3)




Binary Search Trees:
   
    No. of trees : (2n)! / ( (n+1)! x n! )



======================================================


Multistage Graph:

    Time Complexity : O(n^2)
                = O(V^2)
                = O(E) , since a multistage graph can have up to Θ(V^2) edges

   
======================================================


Longest Common Subsequence:

(a) Using recursion :
    O(2^(m+n))

(b) Using Dynamic Programming:
    O(mn)


======================================================


0/1 Knapsack / Subset Sum :


a) Using Recursive function (BruteForce method or Enumeration):

    Depth of the recursion tree = O(n)

    Total no. of nodes : O(2^n) ————> Completely full Binary Tree

                = O((2^n) / 2) —————> Almost Complete  or Half full Binary Tree

    Therefore no. of function calls : O( 2^n )

    Time Complexity : O( 2^n )   

   
b) Using Dynamic Programming:

    Time Complexity : O(nW) ,  where n = no. of objects
                                W = capacity of the knapsack
                        (pseudo-polynomial, since W is a numeric value, not an input size)

   
======================================================


 Subset Sum :


    Same analysis as the 0/1 Knapsack above:

    - Using Recursion / Brute force : O(2^n)   (recursion tree of depth ’n’)
    - Using Dynamic Programming     : O(nW) ,  where n = no. of elements and W = target sum

   
Example :
    S = { 6, 3, 2, 1}
    W = 5
   
   

======================================================


Travelling Salesman Problem:


a) Without Dynamic Programming / Using Brute Force method or Enumeration:


    Time Complexity :     O(n!)



b) With Dynamic Programming (Held-Karp; the problem is NP-hard, so the DP is still exponential):

    Time Complexity : O(n 2^n) x O(n)
                = O(n^2 x 2^n)



======================================================


All Pairs Shortest Path / Floyd Warshall Algorithm:


a) Using Dijkstra’s Algorithm (requires non-negative edge weights):

    Time Complexity of Dijkstra’s Algorithm  :     O(E Log V)
   
    If Dijkstra’s Algorithm is applied to all vertices
            = O(V) x O(E Log V)
            = O(VE Log V) ——> Sparse Graphs

            = O(V. V^2 Log V)
            = O(V^3 Log V) ——-> Dense Graphs


b) Using Bellman Ford’s Algorithm:

    Time Complexity of Bellman Ford’s Algorithm  :  O(VE)
   
    If Bellman Ford’s Algorithm is applied to all vertices
            = O(V) x O(VE)
            = O(V^2 . E)    ——> Sparse Graphs

            = O(V^2 . V^2)
            = O(V^4)         ——-> Dense Graphs


c) Using Dynamic Programming (Floyd-Warshall):

    There are n^2 subproblems (one per pair of vertices), stored in a dense n x n distance matrix, and each is relaxed through ’n’ intermediate vertices.

    Time Complexity :     n . n^2
                = O(n^3)   
       


======================================================



Sunday, January 3, 2016

TOC Quick Notes:


Regular Language Definition:

- The languages that can be represented by Regular Expressions are called Regular Languages
-  The class of languages obtained by applying union, concatenation and Kleene star finitely many times to the basis elements (ϕ, ϵ , {a} ) is the class of Regular Languages

- The corresponding finite representations of Regular languages are called Regular Expressions.

- The language generated by Right Linear Grammar ( a type of Context free grammar with restrictions ) is Regular Language.




** Not all languages can be represented by Regular Expressions, so we use the more general tool called Grammars for representing languages.



Linear Grammar :

- It is a special type of Context Free Grammar.
- A CFG G = (N, Σ, P, S) is said to be linear if every production rule of G has at most one non-terminal symbol on its right-hand side.

 Ex.    S —> aSb ( Linear Grammar )
    S —> AB  ( Non-linear Grammar )


    S —> aA ( Right Linear Grammar )
    S —> Aa ( Left Linear Grammar )


*Note :  A Right linear grammar or Left linear grammar is also called a Regular Grammar.



Testing Regular Languages:
======================
a) If it can be expressed using the Regular Expressions.
b) If it can be represented using Right linear Grammar.




=================================================================


1. Total number of sub strings present in  "GATE" is
A)7       B) 10          C) 11         D)  8

Ans: 11

     (n x (n+1) / 2 ) + 1
     = ((4 * 5)/ 2) + 1
     = 11
    
====================================================================

2. Number of trivial substrings in “GATE2013” are:
A. 37 B. 35 C. 2 D. 36

Ans : C

 ε (the empty string) and "GATE2013" itself

=====================================================================


3. How many DFA with 4 state can be constructed over alphabet a and b with designated initial state?

Ans:
Initial state is fixed. From each state, the transition for each of the symbols a and b can go to any of the 4 states, so there are 4 x 4 = 16 = 2^4 possible transition choices per state. Across the 4 states this gives 2^4 ∗ 2^4 ∗ 2^4 ∗ 2^4 = 2^16 possible transition functions.

Each state also has 2 possibilities of being final state or not, so there are 2^4 possibilities for choosing final states.

So total number of DFAs = 2^16∗2^4=2^20.

========================================================================


Turing Machine:

    - It is the kind of automaton which reads/writes symbols on the tape, and the control head can move left and right.
    - It accepts the Recursively Enumerable Languages
    - It does not allow ε (epsilon) moves.
    - It may or may not halt (it can go into an infinite loop)
    - If the string is in the language, the TM halts in a final state
    - If the string is not in the language, the TM either halts in some non-final state or loops forever.
    - It can be thought of as an FA with 2 stacks.



    Possible behaviours:
    - It may halt and accept, if the string is in the language
    - It may halt and reject, if the string is not in the language
    - It may enter an infinite loop.



Halting Turing Machine:

    - It always halts, whether or not the string is in the Language
    - The TM doesn’t enter an infinite loop.
    - It accepts the Recursive Languages



Linear Bound Automata:

    - It is a restricted TM in which the length of the tape is limited only to the input string.
    - It is a halting TM
    - It accepts the Context Sensitive Language


====================================================================

Unrestricted Grammar:

    - It may have ε (epsilon) productions
    - The corresponding TM, however, does not use ε symbols on the tape.



Context Sensitive Grammar:

    - It doesn’t have ε (epsilon) productions (except possibly S → ε when S does not appear on any right-hand side)
   
====================================================================



Recursively Enumerable Language:

    - Recursively Enumerable languages are not closed under complementation.



Recursive Language:

    - Recursive languages are closed under complementation.




Context Free Languages:

    - Closed under Union, Concatenation and Kleene Closure
    - Not closed under intersection and complementation.




Regular Languages:

    - Closed under Union, Concatenation, Kleene Closure, intersection and complementation.




*Note : If L is a Context free language, its complement L’ is both a Recursive Language and an RE Language (though not necessarily context free).



====================================================================



Summary of all the Languages:

Recursively Enumerable Language ( TM )
Recursive Language ( Halting TM )
Context Sensitive Language   ( LBA )
Context Free Language ( PDA )
Deterministic Context Free Language ( DPDA)
Regular Language ( FA )


- All the languages have a Membership Algorithm except the RE languages, since the TM may enter an infinite loop.

========================================================================



Saturday, December 26, 2015

Digital Electronics Quick Notes:


Digital Counters are of 2 Types:


(i) Synchronous Counters:

    - A common Clock is attached to all the FF’s
    - The input to each FF is a function of the output(s) of other FF’s
    - D Type FF is used in Synchronous Counters
    - It is faster than Asynchronous Counters
    - Shift-register based synchronous counters (Ring, Johnson) are also called Shift Counters.


    Types of Shift Counters (Synchronous Counters):

    (i) Ring Counter :
        - The output Q of the last FF is connected to the input of the first FF.
        - A Mod 2 counter has 2 FF’s and can count 2 states.
        - A Mod 3 counter has 3 FF’s and can count 3 states.

        Counting States:
        ———————
        For ’N’ FF’s ,  the counting states are ’N’
        Ex:  A ring counter with 4 FF’s can count 4 states.

       

    (ii) Johnson Counter ( or Twisted Ring Counter ) :
        - The complemented output Q’ of the last FF is connected to the input of the first FF.
        - A Mod 4 counter has 2 FF’s and can count 4 states.
        - A Mod 6 counter has 3 FF’s and can count 6 states.


        Counting States:
        ———————
        For ’N’ FF’s ,  the counting states are ’2N’
        Ex:  A Johnson counter with 3 FF’s can count 6 states.



(ii) Asynchronous Counters:

    - Asynchronous counters are also called Ripple Counters.
    - A Clock is attached only to the first FF.
    - The output of each FF is given as the clock signal to the next FF.
    - T Type FF is used in Asynchronous Counters
    - It is slower than Synchronous Counters due to the propagation delay through the FF’s
    - It can be used as an Up Counter or a Down Counter.
    - A Mod 8 counter has 3 FF’s and can count 8 states.


    Counting States:
    ———————
    For ’N’ FF’s ,  the counting states are 2^N
    Ex:  An Asynchronous counter with 3 FF’s can count 8 states.

Sunday, December 13, 2015

Compiler Design Quick Notes:



We have seen that a lexical analyser can identify tokens with the help of regular expressions and pattern rules. But a lexical analyser cannot check the syntax of a given sentence due to the limitations of the regular expressions. Regular expressions cannot check balancing tokens, such as parenthesis. Therefore, this phase uses context-free grammar (CFG), which is recognised by push-down automata.

CFG, on the other hand, is a superset of Regular Grammar, as depicted below:






It implies that every Regular Grammar is also context-free, but there exist some constructs which are beyond the scope of Regular Grammar. CFG is a helpful tool for describing the syntax of programming languages.
Context-Free Grammar

In this section, we will first see the definition of context-free grammar and introduce terminologies used in parsing technology.

A context-free grammar has four components:

    A set of non-terminals (V). Non-terminals are syntactic variables that denote sets of strings. The non-terminals define sets of strings that help define the language generated by the grammar.

    A set of tokens, known as terminal symbols (Σ). Terminals are the basic symbols from which strings are formed.

    A set of productions (P). The productions of a grammar specify the manner in which the terminals and non-terminals can be combined to form strings. Each production consists of a non-terminal called the left side of the production, an arrow, and a sequence of tokens and/or non-terminals, called the right side of the production.

    One of the non-terminals is designated as the start symbol (S); from where the production begins.

The strings are derived from the start symbol by repeatedly replacing a non-terminal (initially the start symbol) by the right side of a production, for that non-terminal.









Top Down Parsers:
===============


a) Recursive Descent Parser:
    - There is a recursive function for every Variable.
    - Terminals have a match function



b) LL(1) Parser:

    - Left Recursive grammars are not allowed
    - Ambiguous grammars are not allowed
    - Table entries are atomic (i.e. not more than one production is allowed per cell of the table)
    - First and Follow sets are used to place productions in the cells of the (Variables x Terminals) table.



Shift-Reduce Parsing
==================

Shift-reduce parsing uses two unique steps for bottom-up parsing. These steps are known as shift-step and reduce-step.

    Shift step: The shift step refers to the advancement of the input pointer to the next input symbol, which is called the shifted symbol. This symbol is pushed onto the stack. The shifted symbol is treated as a single node of the parse tree.

    Reduce step : When the parser finds a complete grammar rule (RHS) and replaces it with the corresponding (LHS), it is known as reduce-step. This occurs when the top of the stack contains a handle. To reduce, the handle is popped off the stack and replaced with the LHS non-terminal symbol.





Bottom Up Parsers:
================

a) Operator Precedence Parser:

    - Can accept Ambiguous grammars
    - Productions have to be in the form of Operand-operator-Operand
    - An operator precedence relations table, or operator precedence function tables, are used for operator precedence and associativity.
    - The precedence relations table has space complexity of O(n^2)
    - The precedence function tables have space complexity of O(2n)



b) SR Parsers: (Shift Reduce Parsers : Stack is used)

    - All the parsers differ only by their parsing table entries.
    - All the productions use items, i.e. a dot (.) placed in the RHS of productions.
        Ex. S —> .AA
    - All the parsers start from the augmented production S’ -> .S
    - Items are grouped into “Shift” states (dot before a symbol) and “Reduce” states (completed productions, dot at the end).
    - SR (Shift/Reduce) and RR (Reduce/Reduce) conflicts may appear in the Parsing table.



    Types of LR Parsers:

    LR(0) and SLR(1) rely on the canonical LR(0) items

    (i) LR(0) Parser :
            - For a state containing a completed production, the reduce action is placed across all the terminal entries of that row.

    (ii) SLR(1) Parser :
            - The reduce action is placed only under the terminals in Follow of the left-hand side of the completed production.


    - LALR(1) and CLR(1) rely on LR(1) items, i.e LR(0) items + lookahead symbols
    - They have more entries and states than the LR(0) item sets due to the lookahead symbols.
    - Along with each production there is an associated lookahead symbol, computed as First of whatever follows the dot (propagating $ when nothing does).
    - The CLR(1) parsing table can be converted to an LALR(1) parsing table by merging the states that differ only in their lookahead symbols, which reduces the table size.


    (iii) LALR(1) Parser :
        - The states of CLR(1) that differ only in their lookaheads are merged.
        - May result in RR (reduce/reduce) conflicts due to the merging of lookaheads.

    (iv) CLR(1) Parser :
        - Uses the full canonical collection of LR(1) items; the most powerful of the four, at the cost of the largest parsing table.


    Notes:
    =====
    - Number of states in CLR(1) >= Number of states in LR(0), SLR(1) & LALR(1) parsing tables.
    i.e CLR(1)  >=  LR(0) = SLR(1) = LALR(1)
  


=====================================================================


SDTs : Syntax Directed Translations are of 2 types

(i) S - Attributed SDT :

    - If an SDT uses only synthesised attributes, it is called an S-attributed SDT.
    - These attributes are evaluated with semantic actions written at the end of the production (right hand side).









(ii) L - Attributed SDT :

    - This form of SDT uses both synthesised and inherited attributes, with the restriction of not taking values from right siblings.
    - In L-attributed SDTs, a non-terminal can get values from its parent, children, and left siblings.

    As in the following production
    S → ABC
    S can take values from A, B, and C (synthesised). A can take values from S only. B can take values from S and A. C can get values from S, A, and B. No non-terminal can get values from the sibling to its right.



Attributes in L-attributed SDTs are evaluated in a depth-first, left-to-right manner.






We may conclude that if a definition is S-attributed, then it is also L-attributed, since the class of L-attributed definitions encloses the S-attributed ones.




  - SDTs are used for evaluating an arithmetic expression based on arithmetic precedence and associativity.
   - SDTs are also used for infix to postfix translations
Ex. A+B —> AB+

   
=====================================================================









Sunday, December 6, 2015

For NEW students joining IISc


Joining IISc

Bring
  • at least 6 passport size photos and 6 stamp size photos.
  • Original mark sheets and at least two copies of each sheet.
  • Offer letter, fees paid, train tickets for attending interview and joining
  • PAN card and its photocopy for a bank account. For student accounts, there is no need for a PAN card, but if you want a regular account, you need a PAN card.
It might be a good idea to arrive a day or two before the reporting date and avail the hostel accommodation (does not apply to UG admissions).
 
Getting to IISc


IISc is 6 km away from the main railway station and bus stand (commonly called Majestic, though the official name is Kempe Gowda Bus stand) and 38 km from the airport. Autos cost Rs 13 per km while taxis cost Rs 18 per km. Prepaid taxis from the railway station and airport cost Rs 300 and Rs 750, respectively. If you are coming for the first time to Bangalore by train or bus, you can take a prepaid auto for Rs. 90 and come to IISc. If you do not have any luggage, you can take a local bus from Majestic (use the overbridge/subway) to cross to the local bus stand. Go to platform 23/24/25 and take any of the 252, 271 series buses. The frequency of the buses is one per two min during peak hours, one per five min during non-peak hours and one per ten min during odd hours (11 pm - 5 am). Ask for Tata Institute (that's what IISc. is commonly known as); that'll get you to 18th cross, Malleswaram or the Yeshwantpur bus stand.

Remember always to ask for Tata Institute, whether it is a bus, auto or taxi. People usually land up at IIM if they ask for IISc.

If you are coming on July 27/28/29/30, then the student council of IISc will pick you up from the Bangalore city station and Yeshwanthpur station. Buses with IISc banners will be there in the parking lot. The main Majestic bus stand is adjacent to the Bangalore city station.

If you are coming from the airport, you can take the Vajra Vayu service (a/c buses) and ask for Mekhri Circle. There are buses almost every 15 minutes from the airport. They will charge you Rs. 150 per person. Get down in Mekhri Circle and ask for Tata Institute. It is 2-3 km and you can take an auto.
First day at IISc

On the date of reporting, go to the faculty hall in the main building. Tokens will be issued as soon as you come and there will be a long queue. You will be called based on your admission number and then your mark sheets, GATE score card etc will be verified. Your photo will be taken and you will be issued an identity card. A 16 digit student registration number (SRN) will be allotted to you. Your first job in IISc is to memorize this number because you will require it everywhere. Please take your photographs with you and you may have to stick them in many places.

Next, visit the hostel office. Pay the hostel fees and show the proof of hostel fees paid and ask for a room allotment. There is a lucky dip and you will be allotted a room. You will be given a room allotment letter, which will serve as your proof of residence.

If you are aware of who will be your research supervisor, be sure to visit his/her office and say hello. Then visit your department office and get all the course registration forms. Each department will hold an orientation program and make sure you get those dates and times.

Next, visit either SBI or Canara Bank and get a new account.

Purchase a SIM card. The BSNL office is near the main building. Private vendors also sell different private phone operators near the hostel office. Many will not sell postpaid options.

Purchase mattresses, pillows etc from the vendors near the hostel office. These vendors are available only during this period. Many other small items like buckets, hangers etc. can be purchased in the stores on IISc campus.

Campus
IISc is a huge residential colony and has all facilities within the campus obviating a need to leave the campus for anything. Take a look at the campus map. 
Here are some places on the campus that you might wish to visit during the next few days. There's the Main Library, the Campus Bookstore (near the Cafe, open noon to 7 pm, except Sundays), the Xerox centres (in the library, near the Cafe and near TMC, open 8:30 am to 8:30 pm), two banks (Canara bank and SBI on your map), and a Health Centre (marked Hospital on your map). For outgoing phones, you may go to the STD/ISD booth west of Canara bank, or to the Telecom Centre (7 am to 10 pm, except Sundays). The Post office (open 9-5 Mon-Sat) is approached from Prof. CNR Rao circle. It handles registered, speed post, money order etc. in addition to regular mail. The Janatha bazaar (called the Cosmetic Center on your map) is open Mon-Sat and houses a small shop (7-11 am, 5-8 pm), a tailor (10 am-6 pm), a launderer (10 am-8 pm), a cobbler (8 am-8 pm), and a men's hairdresser (7 am-noon, 4-7:30 pm). The launderer, cobbler and hairdresser are also available Sunday mornings. Effective June, 2007, a cycle shop has been opened here. There is a watch repair shop in E-Block (between 4-6 pm on all days except Sunday) and a cycle repair shop near U-Block. There is also a small shopping area at the north end of campus, which has a tailor, travel agent and bookbinder. On the south side of the highway is the Gymkhana. Please do NOT cross the road but use the yellow over-bridge to cross the highway. Inside is a large hall for badminton, table tennis, and movies. In addition there is a carrom room and a TV room; upstairs is a small Library and a music room. Outside is a large playground, a running track and the Students Auditorium. At the Gymkhana, you can obtain permission to use the swimming pool located at the other end of campus (near the guest house).

Prakurthi (marked Cafe on your map) offers chats, various kinds of food, snacks and coffee/tea from 7:30 am to 3 am  on all days.  Kabini (marked TMC on your map) offers snacks, tea and coffee from 7 am to 6 pm on all days. A good restaurant near Kabini called Nesara is open from 8 am to 11 pm and is slightly more expensive than other eating places on campus but it may be worth it. There is Snacks shop in side of CEDT (opens on week days between 10:45 & 11:30 AM and 3 & 3:45 PM). The small shopping area at the far north end of the campus near the ECE department has a bakery and a fruit/vegetable shop. In addition, vendors seated on the road near TMC, SBI and NCSI offer fresh fruit, groundnuts, etc. in the daytime (but not on Sundays). 



You can learn more about the campus.
Accommodation nearby
If you are coming with your parents, your parents may need to stay in a hotel. A few hotels are listed below

Akshara Regency
No4/20, Mathekere First Main Road, Gokula First stage
Bangalore.
Phone 080 2337 7746, 09036970369
Rate ~ 550-950 Rs for 2 persons


Ujwal Residency
#50, Yeswathpur-Mathikere Main Road
Bangalore
Phone 080 40974573, 09986131989
Rate - 750-1000 Rs for 2 persons

Sree Nandanam Residency
#11 Gokul First Stage
Mathikere Main Road
Bangalore
Phone : 080 40926908, 40926909
Rate: 550-1000 Rs for two persons

Sri Akshaya Deluxe Lodges
#4 Mathikere Main Road
Bangalore
Phone 080 2347 0128, 09731925513
Rate Rs 500- 800 for two persons

Krishinton Suites
# 993, M.S. Ramaiah Main Road, Near IISc
D-Gate, Mathikere, Bangalore - 560 054,
+91 80 4259 5959
Rate 1200-2000 for 2 persons

The Basil
8, Sampige Road, Malleshwaram, Bangalore - 560 003
080- 40402323
Rate 2000 for 2 persons

New Sagar Comforts & Banquets
No.5 & 6-8, 11th Mn Rd, 2nd Phs, 1st STG,
Yeshwanthpur Mn Rd, Gokula Extension, Bangalore - 560054
+(91)-(80)-40084999, 41273453
Rate 400-800

Disclaimer: The above is based on my experience and is personal advice. It does not represent IISc's opinion or represent any commercial interest.

Friday, December 4, 2015

DataStructures Quick Notes:


Complete Binary Tree:
==================

Total number of nodes (for a full/perfect tree of height 'h') :
2^(h+1) - 1


Internal Nodes (array indices):
1 to ⌊n/2⌋


Leaf nodes (array indices):
⌊n/2⌋ + 1 to n


Max no. of nodes at height 'h' (in an n-node heap):
⌈ n / 2^(h+1) ⌉


====================================================


How many distinct binary search trees can be created out of 4 distinct keys?
Ans:
No. of BSTs = (2n)Cn / (n+1)   (the nth Catalan number)
Therefore 8C4 / 5 = 70 / 5 = 14

=======================================================

A binary tree T has 20 leaves. The number of nodes in T having two children is ______.

Ans:
In Binary tree If there are N leaf nodes then the number of Nodes having two children will be N-1. So in this case answer will be 20-1, means 19.

=======================================================

*** A complete graph Kn has n^(n-2) spanning trees (Cayley's formula).

=======================================================


Simple Graph
A graph with no loops and no parallel edges is called a simple graph.
* The maximum number of edges possible in a simple graph with ‘n’ vertices is nC2 = n(n – 1)/2.

* The number of simple graphs possible with ‘n’ vertices = 2^(nC2) = 2^(n(n-1)/2).




The maximum number of edges with n=3 vertices −
nC2 = n(n–1)/2
   = 3(3–1)/2
   = 6/2
   = 3 edges

The maximum number of simple graphs with n=3 vertices −
2^(nC2) = 2^(n(n-1)/2)
   = 2^(3(3-1)/2)
   = 2^3
   = 8


=======================================================

Example 1:
Let ‘G’ be a simple graph with nine vertices and twelve edges. Find the number of edges in its complement G'.

We have |E(G)| + |E(G')| = |E(Kn)|

12 + |E(G')| = 9C2 = 9(9-1)/2 = 36

|E(G')| = 24


=======================================================

Example 2:
‘G’ is a simple graph with 40 edges and its complement G' has 38 edges. Find the number of vertices in the graph G (or G').

Let the number of vertices in the graph be ‘n’.

We have |E(G)| + |E(G')| = |E(Kn)|

40 + 38 = n(n-1)/2

156 = n(n-1)

13(12) = n(n-1)

n = 13

=======================================================


Circuit Rank
Let ‘G’ be a connected graph with ‘n’ vertices and ‘m’ edges. A spanning tree ‘T’ of G contains (n-1) edges.
Therefore, the number of edges you need to delete from ‘G’ in order to get a spanning tree = m-(n-1), which is called the circuit rank of G.


Example 1:

For a graph with m=7 edges and n=5 vertices,

the circuit rank is

circuit rank = m – (n – 1)
  = 7 – (5 – 1)
  = 3

=======================================================


Example 2:

Let ‘G’ be a connected graph with six vertices and the degree of each vertex is three. Find the circuit rank of ‘G’.

By the sum of degrees of vertices theorem,

Σ (i = 1 to n) deg(Vi) = 2|E|

6 × 3 = 2|E|

|E| = 9

Circuit rank = |E| – (|V| – 1)

= 9 – (6 – 1) = 4

=======================================================


Euler’s Path
An Euler’s path contains each edge of ‘G’ exactly once and each vertex of ‘G’ at least once. A connected graph G is said to be traversable if it contains an Euler’s path.
Example

Euler’s Path = d-c-a-b-d-e.


=======================================================


Example 3:

Let ‘G’ be a connected planar graph with 20 vertices and the degree of each vertex is 3. Find the number of regions in the graph.

Solution:

By the sum of degrees theorem,

Σ (i = 1 to 20) deg(Vi) = 2|E|

20(3) = 2|E|

|E| = 30


By Euler’s formula,

|V| + |R| = |E| + 2

20+ |R| = 30 + 2

|R| = 12

Hence, the number of regions is 12.

=======================================================

A self-complementary graph is a graph which is isomorphic to its graph complement.

All self-complementary graphs have graph diameter 2 or 3
A disconnected graph has infinite diameter


By definition, a self-complementary graph must have exactly half the total possible number of edges, i.e., n(n-1)/4 edges for a self-complementary graph on n vertices. Since n(n-1) must be divisible by 4, it follows that n ≡ 0 or 1 (mod 4).


=======================================================


Red-Black Tree

An extended rooted binary tree satisfying the following conditions:
1. Every node has two children, each colored either red or black.
2. Every tree leaf node is colored black.
3. Every red node has both of its children colored black.
4. Every path from the root to a tree leaf contains the same number (the "black-height") of black nodes.

=======================================================


Sunday, November 29, 2015

Code Snippets


#include <stdio.h>
#include <math.h>   /* for the pow() alternative below (link with -lm) */

int main(void) {

    printf("\n");

    if (0) {
        /* Divide by 2: print 1024, 512, ..., 1 using right shifts */
        int i = 1024;
        while (i > 0) {
            printf("%d,", i);
            i >>= 1;
        }
        printf("\b ");      /* backspace over the trailing comma */
    } else {
        /* Multiply by 2: print 1, 2, 4, ..., 512 using left shifts */
        int j;
        for (j = 0; j < 10; j++) {
            /* printf("%d,", (int)pow(2, j)); */
            printf("%d,", 1 << j);
        }
        printf("\b ");      /* backspace over the trailing comma */
    }
    printf("\n\n");

    return 0;
}