Knowing the Terminology: Beyond Being Ignorant


There is no software that can hack Facebook passwords (beyond keylogging and phishing), and the same goes for e-mail account hacking. Queries and desires like these are where many people start off with primitive, constructive hacking, but they often fall off the edge into destructive black-hat hacking out of curiosity. Curiosity is an agent that can push the envelope forward, but if not channelled the right way it only leads to a false reality that is neither constructive nor real. Everyone else seems tired of the questions, and yet there is genuine confusion about whom to admire: the people who work hard on the white side, or the people who work just as hard on the black side, breaking security. Now, when I say ‘security’, it does not strictly have to mean application security or network security; think bigger and there is physical security, mobile security, personal security. A person who bypasses security, or any counter-measures put in place to prohibit access, is generally called a ‘cracker’ or a ‘hacker’.

But don’t get hooked by the terms: they are very different, and it depends on which side a person chooses. If a person has chosen the darker side, in the long run he will be termed a cracker; if not, he belongs to the ‘hacker’ category. The latter is the category we will be talking about, since I have no real interest in the other side of the fence, where things keep getting worse as law enforcement grows stricter. Now, a cowboy might out of curiosity explore both sides, and this is the gray area; such people are called Gray Hats. Let’s walk straight to the point and look at some terminology that could illuminate people who have been missing a lot of the what, why, how and where. Here are some terms related to computer science, but inclined towards ‘computer hacks’ in a broader scope.

The kernel is the main component of most computer operating systems; it is a bridge between applications and the actual data processing done at the hardware level. The kernel’s responsibilities include managing the system’s resources (the communication between hardware and software components). Usually, as a basic component of an operating system, a kernel provides the lowest-level abstraction layer for the resources (especially processors and I/O devices) that application software must control to perform its function. It typically makes these facilities available to application processes through inter-process communication mechanisms and system calls.

Linux is a computer operating system based on free and open-source software. Although many different varieties of Linux exist, all are Unix-like and based on the Linux kernel, an operating system kernel. Linux was originally just a ‘kernel’; lines of code were added by the community over time to make it better and better, and now Linux has many distributions with millions of lines of code and utilities.

An exploit (from the verb to exploit, in the meaning of using something to one’s own advantage) is a piece of software, a chunk of data, or sequence of commands that takes advantage of a bug, glitch or vulnerability in order to cause unintended or unanticipated behavior to occur on computer software, hardware, or something electronic (usually computerized). This frequently includes such things as gaining control of a computer system.

A shell is a piece of software that provides an interface for users of an operating system which provides access to the services of a kernel. However, the term is also applied very loosely to applications and may include any software that is “built around” a particular component, such as web browsers and email clients that are “shells” for HTML rendering engines. The name shell originates from shells being an outer layer of interface between the user and the internals of the operating system (the kernel).

PHP is a general-purpose server-side scripting language originally designed for web development to produce dynamic web pages. For this purpose, PHP code is embedded into the HTML source document and interpreted by a web server with a PHP processor module, which generates the web page document. It also has evolved to include a command-line interface capability and can be used in standalone graphical applications.

jQuery is a cross-browser JavaScript library designed to simplify the client-side scripting of HTML. It was released in January 2006 at BarCamp NYC by John Resig. Used by over 49% of the 10,000 most visited websites, jQuery is the most popular JavaScript library in use today.

A network host is a computer connected to a computer network. A network host may offer information resources, services, and applications to users or other nodes on the network. A network host is a network node that is assigned a network layer host address.

Algorithm: In mathematics and computer science, an algorithm is an effective method expressed as a finite list of well-defined instructions for calculating a function. Algorithms are used for calculation, data processing, and automated reasoning. In simple words, an algorithm is a step-by-step procedure for calculations.

There are many websites that can be searched for vulnerabilities and hacked opportunistically, but a real hacker first selects a website and then tries to break into it; this is termed targeted hacking.

A router is a device that forwards data packets between computer networks, creating an overlay inter-network. A router is connected to two or more data lines from different networks. When data comes in on one of the lines, the router reads the address information in the packet to determine its ultimate destination. Then, using information in its routing table or routing policy, it directs the packet to the next network on its journey. Routers perform the “traffic directing” functions on the Internet.

A data packet is typically forwarded from one router to another through the networks that constitute the inter-network until it gets to its destination node.

In computer networks, a proxy server is a server (a computer system or an application) that acts as an intermediary for requests from clients seeking resources from other servers.

BB5 unlocking in Nokia phones: it is not possible to install an unsigned OS on BB5 Nokia phones (this concerns firmware signing, not SIM-lock).

The Metasploit Project is an open-source computer security project which provides information about security vulnerabilities and aids in penetration testing and IDS signature development. Its most well-known sub-project is the Metasploit Framework, a tool for developing and executing exploit code against a remote target machine. Other important sub-projects include the Opcode Database, the shellcode archive, and security research.

There is no method to decrypt a Nokia MCUSW file and modify it, because the checksum would then differ from the one expected by the phone and the firmware would not install. Symbian, however, can be hacked using ROM Patcher and HelloOX.

Free hosting websites do not allow running the Rapidleech script and similar tools.

Unix (officially trademarked as UNIX, sometimes also written as Unix) is a multitasking, multi-user computer operating system originally developed in 1969 by a group of AT&T employees at Bell Labs, including Ken Thompson, Dennis Ritchie, Brian Kernighan, Douglas McIlroy, and Joe Ossanna. The Unix operating system was first developed in assembly language.

A scripting language, script language, or extension language is a programming language that allows control of one or more applications. “Scripts” are distinct from the core code of the application, as they are usually written in a different language and are often created, or at least modified, by the end-user. Scripts are often interpreted from source code or bytecode.

Cross-site scripting (XSS) is a type of computer security vulnerability typically found in web applications that enables attackers to inject client-side script into web pages viewed by other users. A cross-site scripting vulnerability may be used by attackers to bypass access controls such as the same-origin policy. It is a web application security vulnerability and can be classified into different types of attack: persistent, non-persistent and DOM-based. Certain output contexts can be broken out of using particular characters if the application does not black-list them, or does not use a white-list to allow only a known-safe set of characters into its input. Output encoding is one of the many methods to stop this kind of attack.

A denial-of-service attack (DoS attack) or distributed denial-of-service attack (DDoS attack) is an attempt to make a computer resource unavailable to its intended users. Although the means to carry out, motives for, and targets of a DoS attack may vary, it generally consists of the concerted efforts of a person, or multiple people, to prevent an Internet site or service from functioning efficiently or at all, temporarily or indefinitely.

A Media Access Control address (MAC address) is a unique identifier assigned to network interfaces for communications on the physical network segment. MAC addresses are used for numerous network technologies and most IEEE 802 network technologies including Ethernet. Logically, MAC addresses are used in the Media Access Control protocol sub-layer of the OSI reference model.

Social engineering is the art of manipulating people into performing actions or divulging confidential information. While similar to a confidence trick or simple fraud, the term typically applies to trickery or deception for the purpose of information gathering, fraud, or computer system access; in most cases the attacker never comes face-to-face with the victim.

Rooting is a process that allows users of mobile phones and other devices running the Android operating system to attain privileged control (known as “root access”) within Android’s Linux subsystem with the goal of overcoming limitations that carriers and manufacturers put on some devices. It is analogous to jailbreaking on devices running the Apple iOS operating system.

Tethering means sharing the Internet connection of an Internet-capable mobile phone with other devices. This sharing can be offered over a wireless LAN (Wi-Fi), or over Bluetooth, or by physical connection using a cable. In the case of tethering over wireless LAN, the feature may be branded as a mobile hotspot. The Internet-connected mobile phone acts as a portable router when providing tethering services to others.

Malware, short for malicious software, consists of programming (code, scripts, active content, and other software) designed to disrupt or deny operation, gather information that leads to loss of privacy or exploitation, gain unauthorized access to system resources, and other abusive behavior. The expression is a general term used by computer professionals to mean a variety of forms of hostile, intrusive, or annoying software or program code.

A honeypot is a trap set to detect, deflect, or in some manner counteract attempts at unauthorized use of information systems. Generally it consists of a computer, data, or a network site that appears to be part of a network, but is actually isolated and monitored, and which seems to contain information or a resource of value to attackers.

A cache is a component that transparently stores data so that future requests for that data can be served faster. The data that is stored within a cache might be values that have been computed earlier or duplicates of original values that are stored elsewhere. If requested data is contained in the cache (cache hit), this request can be served by simply reading the cache, which is comparatively faster. Otherwise (cache miss), the data has to be recomputed or fetched from its original storage location, which is comparatively slower. Hence, the more requests can be served from the cache the faster the overall system performance.

A Trojan horse, or Trojan, is software that appears to perform a desirable function for the user prior to run or install, but (perhaps in addition to the expected function) steals information or harms the system. The term is derived from the Trojan Horse story in Greek mythology.

Overclocking is the process of operating a computer component at a higher clock rate (more clock cycles per second) than it was designed for or was specified by the manufacturer.

The MD5 Message-Digest Algorithm is a widely used cryptographic hash function that produces a 128-bit (16-byte) hash value. MD5 has been employed in a wide variety of security applications, and is also commonly used to check data integrity; it is, however, considered cryptographically broken today.

An assembly language is a low-level programming language for computers, microprocessors, microcontrollers, and other programmable devices. It implements a symbolic representation of the machine codes and other constants needed to program a given CPU architecture.

A hash function is any algorithm or subroutine that maps large data sets to smaller data sets, called keys. For example, a single integer can serve as an index to an array (associative array). The values returned by a hash function are called hash values, hash codes, hash sums, checksums or simply hashes.

In computer security and programming, a buffer overflow, or buffer overrun, is an anomaly where a program, while writing data to a buffer, overruns the buffer’s boundary and overwrites adjacent memory. This is a special case of violation of memory safety. Buffer overflows can be triggered by inputs that are designed to execute code, or alter the way the program operates. This may result in erratic program behavior, including memory access errors, incorrect results, a crash, or a breach of system security. They are thus the basis of many software vulnerabilities and can be maliciously exploited.

Remote File Inclusion (RFI) is a type of vulnerability most often found on websites. It allows an attacker to include a remote file, usually through a script on the web server. The vulnerability occurs due to the use of user-supplied input without proper validation. Depending on severity, this can lead to anything from merely outputting the contents of the file up to arbitrary code execution on the web server.

SQL, often referred to as Structured Query Language, is a programming language designed for managing data in relational database management systems (RDBMS). Originally based upon relational algebra and tuple relational calculus, its scope includes data insert, query, update and delete, schema creation and modification, and data access control. SQL injection, or SQLi, is a code injection technique that exploits a security vulnerability in some computer software. The injection occurs at the database level of an application (in its queries). The vulnerability is present when user input is either incorrectly filtered for string-literal escape characters embedded in SQL statements, or is not strongly typed and is unexpectedly executed. Using parameterised queries (prepared statements) prevents SQL injection.

Here are some tips and facts you may want to check, since they are near-universal in hackerdom culture and people tend to know them by default. If you do not, you are missing something and need to work on it:

  1. Ankit Fadia’s seminars are crap, and so are his courses.
  2. Getting access to a router doesn’t give you access to the whole network.
  3. One cannot just press a key and hack away at a system or bring down SCADA power grids.
  4. There is no software, antivirus program or utility that can detect all the malware in existence.

I would add more if there is feedback on these ideas and on what might be missing. Leave your feedback in the comments, and I’d appreciate it if you could dig out some original sources. Roger out.


Practical System Programming for Graphics in C


By Shritam Bhowmick
Web Application Penetration Tester
LinkedIn: https://www.linkedin.com/profile/view?id=281014248&trk=nav_responsive_tab_profile
Academia: https://independent.academia.edu/ShritamBhowmick
Facebook: https://www.facebook.com/coded32

This is an introduction post on “Graphics and Internet” for WBUT 2014, covering C as a systems programming language and the underlying source-code concepts for drawing basic graphics in C. Earlier, I discussed data structures in C for WBUT University in detail, covering the practical aspects for 2014. This post, in contrast, deals with the graphical side of C programming, keeping the practical labs in mind. I am glad I could post as far as the syllabus is concerned (which is really outdated, as discussed in the last post), and this should help current students who look here after 2014 passes.

===========================================================================

1.) Draw a Basic Circle:

#include <graphics.h>
#include <conio.h>

int main()
{
int gd = DETECT, gm;

initgraph(&gd, &gm, "C:\\TC\\BGI");

circle(100, 100, 50);

getch();
closegraph();
return 0;
}

===========================================================================

2.) Draw a Rainbow

#include <graphics.h>
#include <conio.h>
#include <dos.h>
#include <stdio.h>

void main()
{
int gdriver = DETECT,gmode;
int x,y,i;
initgraph(&gdriver,&gmode,"C:\\TurboC4\\BGI");
x=getmaxx()/2;
y=getmaxy()/2;
for(i=30;i<200;i++)
{
delay(100);
setcolor(i/10);
arc(x,y,0,180,i-10);
}
getch();
closegraph();
}

===========================================================================

3.) Draw a Smiley

#include <graphics.h>
#include <conio.h>
#include <stdio.h>
#include <stdlib.h>

void main()
{
int gd=DETECT,gm;
initgraph(&gd,&gm,"C:\\BGI");
outtextxy(150,10,"This is a program to draw a Smiley!");
circle(200,200,80);
circle(160,180,8);
circle(240,180,8);
arc(200,230,180,360,20);
line(190,245,210,260);
line(210,245,210,260);
arc(200,260,180,360,9);
line(200,180,195,210);
arc(202,210,190,10,8);
getch();
closegraph();
}

===========================================================================

4.) Draw a Diagonal Line

#include <stdio.h>
#include <conio.h>
#include <graphics.h>

void main()
{
int gdriver=DETECT,gmode=0;
initgraph(&gdriver, &gmode, "c:\\tc\\bgi");
line(0,0,getmaxx(),getmaxy());
getch();
closegraph();
}

===========================================================================

5.) Draw an Ellipse

#include <graphics.h>
#include <conio.h>

int main()
{
int gd = DETECT, gm;

initgraph(&gd, &gm, "C:\\TC\\BGI");

ellipse(100, 100, 0, 360, 50, 25);

getch();
closegraph();
return 0;
}

===========================================================================

6.) Draw a Triangle

#include <graphics.h>
#include <conio.h>
#include <stdio.h>

void main() {
int gd = DETECT, gm;
initgraph(&gd, &gm, "c:\\tc\\bgi");

line(300, 100, 200, 200);
line(300, 100, 400, 200);
line(200, 200, 400, 200);

getch();
closegraph();
}

There are others related to web development as well. The web-development posts can already be accessed from here; that section covers everything related to ‘code’ and working with code in different languages, and HTML can be found there too. I would appreciate it if readers left feedback. Roger out.

Data Structures in C Practicals, and Why Colleges with their IDEs are Outdated: A Wiser Proof of Concept


By Shritam Bhowmick
Web Application Penetration Tester
LinkedIn: https://www.linkedin.com/profile/view?id=281014248&trk=nav_responsive_tab_profile
Academia: https://independent.academia.edu/ShritamBhowmick
Facebook: https://www.facebook.com/coded32

The Introduction Preface

Hi, this is about the practical C lab for WBUT winters (Semester 3 of the University-prescribed syllabus): the data-structures questions, solved in one place as a ready reference for students who might seek help, plus a note on why colleges should now be moving out of the stone age. All the programs were compiled under the Turbo C++ compiler and should not be expected to compile unchanged under the GNU C compiler or any Windows-variant compiler (such as the Visual C compiler). Since the syllabus itself is badly outdated, and people have been talking about it in various forums, I came up with a compilation of the code as well as the questions, which could help readers understand the core with ready-to-run code for the Turbo C++ compiler. This is C code, not C++ code. The source code for each program was practically tested on the Windows 8.1 platform, using DOSBox as an emulator to run Turbo C.

The Recommendations

It’s recommended to move beyond the syllabus and pay less attention to it, if anyone is really serious about C and C++ programming. Python, Ruby, Delphi, Lua and Perl are modern, spectacular languages to choose from, and Java is really not as friendly as you might think for a given task: Java needs noticeably long source code for a simple problem. See “Python to other languages – A comparison“. Get to know more at Stack Overflow. To reference the original source, here is the justification, in a nutshell, of why someone would prefer to code in Python rather than older languages:

  1. Speed of development. I can crank out working code before the Java folks have stuff that will compile.
  2. Flexibility. I can refactor and rework a module in an afternoon. The Java folks have to redesign things on paper to be sure they can get some of the bigger refactorings to recompile. In Java, the changes aren’t as small and focused. I should probably blog about some of the nastier refactorings that would be a huge pain, even using the refactoring tools in Eclipse or NetBeans. The most recent example was a major change to the top-most superclass of a class hierarchy that would have invalidated a mountain of Java code. But the method was used in exactly two places, allowing a trivial change to the superclass and those two places. Hundreds of unit tests passed, so I’m confident in the change without the compiler overhead.
  3. Simplicity. Python doesn’t need a sophisticated IDE and super-complex WS libraries. Python code can be written in Komodo Edit. A WS interface between your Java front-end and PostgreSQL can be knocked together in Werkzeug in a day. You can plug your transactions into this without much overhead.

The post primarily compared Python with Java, but there are also reasons someone might prefer Python over C, which can be referenced from this post. This is not meant to compare C and Python on what they deliver, but rather on the average end-user’s perspective. There are other considerations in modern computing as well, addressed in the list below.

Why Would Python be Slow, yet Sometimes Beat C Deployments?

Python is a higher level language than C, which means it abstracts the details of the computer from you – memory management, pointers, etc, and allows you to write programs in a way which is closer to how humans think. It is true that C code usually runs 10 to 100 times faster than Python code if you measure only the execution time. However if you also include the development time Python often beats C. For many projects the development time is far more critical than the run time performance. Longer development time converts directly into extra costs, fewer features and slower time to market.

Why use C, rather than switching to some other programming language?

“If there ever were a quote that described programming with C, it would be this. To many programmers, this makes C scary and evil. It is the Devil, Satan, the trickster Loki come to destroy your productivity with his seductive talk of pointers and direct access to the machine. Then, once this computational Lucifer has you hooked, he destroys your world with the evil “segfault” and laughs as he reveals the trickery in your bargain with him.

But, C is not to blame for this state of affairs. No my friends, your computer and the Operating System controlling it are the real tricksters. They conspire to hide their true inner workings from you so that you can never really know what is going on. The C programming language’s only failing is giving you access to what is really there, and telling you the cold hard raw truth. C gives you the red pill. C pulls the curtain back to show you the wizard. C is truth. Why use C then if it’s so dangerous? Because C gives you power over the false reality of abstraction and liberates you from stupidity.”

The Coverage

Now, since we have already hopped into C with Turbo C as the compiler, I’ll write up the source code of the practicals here and try to keep it as up to date as I can. In turn, it is your responsibility to post back with comments on what else should be added, so this can help others in the process.

Notes

Note: blog formatting tends to strip backslashes (so \n can appear as a plain n) and angle-bracketed include headers; double-check both before compiling. In my opinion this also discourages blind copy/paste and is a great start for self-research.

The Question Set

The questions I have dug so far follows these:

  1. Doubly Linked List with insertion and deletion.
  2. Priority Queue with addition and deletion of elements.
  3. Dequeue or Double Ended Queue using Linked List.
  4. Dequeue or Double Ended Queue using Circular Array.
  5. Priority Queue Implementation using Arrays.
  6. Circular Queue using Linked List with Circular Queue Concept Notes.
  7. Circular Queue Implementation in C Data-Structures using Arrays.
  8. Stack for PUSH and POP operations in C both via recursive and regular methods.
  9. Linear Search Implementation in C using Arrays.
  10. Binary Search Implementation in C using Arrays.
  11. Insertion Sort Implementation in C using Arrays.
  12. Merge Sort Implementation in C using Arrays.
  13. Quick Sort Implementation in C using Arrays.
  14. Heap Sort Implementation in C using Arrays.

Source Code Solutions

1.) Doubly Linked List with Insertion, Deletion and Reverse Display

Here’s the source code for the Doubly Linked List (double-check the include headers and \n escapes before compiling, as blog formatting may mangle them):

#include <stdio.h>
#include <stdlib.h>
#include <conio.h>
struct node
{
struct node *previous;
int data;
struct node *next;
}*head, *last;

void insert_beginning(int value)
{
struct node *var,*temp;
var=(struct node *)malloc(sizeof(struct node));
var->data=value;
if(head==NULL)
{
head=var;
head->previous=NULL;
head->next=NULL;
last=head;
}
else
{
temp=var;
temp->previous=NULL;
temp->next=head;
head->previous=temp;
head=temp;
}
}

void insert_end(int value)
{
struct node *var,*temp;
var=(struct node *)malloc(sizeof(struct node));
var->data=value;
if(head==NULL)
{
head=var;
head->previous=NULL;
head->next=NULL;
last=head;
}
else
{
last=head;
while(last!=NULL)
{
temp=last;
last=last->next;
}
last=var;
temp->next=last;
last->previous=temp;
last->next=NULL;
}
}

void insert_after(int value, int loc)
{
struct node *temp,*var,*temp1;
var=(struct node *)malloc(sizeof(struct node));
var->data=value;
if(head==NULL)
{
head=var;
head->previous=NULL;
head->next=NULL;
}
else
{
temp=head;
while(temp!=NULL && temp->data!=loc)
{
temp=temp->next;
}
if(temp==NULL)
{
printf("\n%d is not present in list ",loc);
}
else
{
temp1=temp->next;
temp->next=var;
var->previous=temp;
var->next=temp1;
if(temp1!=NULL)
temp1->previous=var;
}
last=head;
while(last->next!=NULL)
{
last=last->next;
}
}
int delete_from_end()
{
struct node *temp;
temp=last;
if(temp==NULL)
{
printf("\nList is Empty");
return 0;
}
printf("\nData deleted from list is %d \n",temp->data);
if(temp->previous==NULL)
{
head=NULL;
last=NULL;
}
else
{
last=temp->previous;
last->next=NULL;
}
free(temp);
return 0;
}

int delete_from_middle(int value)
{
struct node *temp,*var,*temp1;
temp=head;
while(temp!=NULL)
{
if(temp->data == value)
{
var=temp->previous;
temp1=temp->next;
if(var==NULL)
head=temp1;            /* deleting the head node */
else
var->next=temp1;
if(temp1==NULL)
last=var;              /* deleting the last node */
else
temp1->previous=var;
printf("\ndata deleted from list is %d",value);
free(temp);
return 0;
}
temp=temp->next;
}
printf("\n%d is not present in the list",value);
return 0;
}

void display()
{
struct node *temp;
temp=head;
if(temp==NULL)
{
printf("List is Empty");
}
while(temp!=NULL)
{
printf("-> %d ",temp->data);
temp=temp->next;
}
}

int main()
{
int value, i, loc;
head=NULL;
printf("Select the choice of operation on link list");
printf("\n1.) insert at beginning\n2.) insert at end\n3.) insert at middle");
printf("\n4.) delete from end\n5.) delete a given value\n6.) display list\n7.) exit");
while(1)
{
printf("\n\nenter the choice of operation you want to do ");
scanf("%d",&i);
switch(i)
{
case 1:
{
printf("enter the value you want to insert in node ");
scanf("%d",&value);
insert_beginning(value);
display();
break;
}
case 2:
{
printf("enter the value you want to insert in node at last ");
scanf("%d",&value);
insert_end(value);
display();
break;
}
case 3:
{
printf("after which data you want to insert data ");
scanf("%d",&loc);
printf("enter the data you want to insert in list ");
scanf("%d",&value);
insert_after(value,loc);
display();
break;
}
case 4:
{
delete_from_end();
display();
break;
}
case 5:
{
printf("enter the value you want to delete ");
scanf("%d",&value);
delete_from_middle(value);
display();
break;
}
case 6 :
{
display();
break;
}
case 7 :
{
exit(0);
break;
}
}
}
printf("\n\n%d",last->data);
display();
getch();
}

2.) Priority Queue with Addition and Deletion of Elements

#include <stdio.h>
#include <stdlib.h>
#define MAX 5

void insert_by_priority(int);
void delete_by_priority(int);
void create();
void check(int);
void display_pqueue();

int pri_que[MAX];
int front, rear;

void main()
{
int n, ch;
printf("\n1 - Insert an element into queue");
printf("\n2 - Delete an element from queue");
printf("\n3 - Display queue elements");
printf("\n4 - Exit");
create();
while (1)
{
printf("\nEnter your choice : ");
scanf("%d", &ch);
switch (ch)
{
case 1:
printf("\nEnter value to be inserted : ");
scanf("%d",&n);
insert_by_priority(n);
break;
case 2:
printf("\nEnter value to delete : ");
scanf("%d",&n);
delete_by_priority(n);
break;
case 3:
display_pqueue();
break;
case 4:
exit(0);
default:
printf("\nChoice is incorrect, Enter a correct choice");
}
}
}

/* Function to create an empty priority queue */
void create()
{
front = rear = -1;
}

/* Function to insert value into priority queue */
void insert_by_priority(int data)
{
if (rear >= MAX - 1)
{
printf("\nQueue overflow no more elements can be inserted");
return;
}
if ((front == -1) && (rear == -1))
{
front++;
rear++;
pri_que[rear] = data;
return;
}
else
check(data);
rear++;
}

/* Function to check priority and place element */
void check(int data)
{
int i,j;
for (i = 0; i <= rear; i++)
{
if (data >= pri_que[i])
{
for (j = rear + 1; j > i; j--)
{
pri_que[j] = pri_que[j - 1];
}
pri_que[i] = data;
return;
}
}
pri_que[i] = data;
}

/* Function to delete an element from queue */
void delete_by_priority(int data)
{
int i;
if ((front==-1) && (rear==-1))
{
printf("\nQueue is empty no elements to delete");
return;
}
for (i = 0; i <= rear; i++)
{
if (data == pri_que[i])
{
for (; i < rear; i++)
{
pri_que[i] = pri_que[i + 1];
}
pri_que[i] = -99;
rear--;
if (rear == -1)
front = -1;
return;
}
}
printf("\n%d not found in queue to delete", data);
}

/* Function to display queue elements */
void display_pqueue()
{
int i;
if ((front == -1) && (rear == -1))
{
printf("\nQueue is empty");
return;
}
for (i = front; i <= rear; i++)
{
printf(" %d ", pri_que[i]);
}
}

3.) Dequeue or Double Ended Queue using Linked List

What is Dequeue?

The word dequeue is short for double ended queue. In a dequeue, insertion as well as deletion can be carried out either at the rear end or the front end. The following diagram illustrates a version of the same concept:

Dequeue Represented Using a Circular Array

The source code is available below:

#define MAX 100
#include <stdio.h>
#include <conio.h>

int delStart();
int delEnd();
int queue[MAX];
/* both ends start at the middle so the deque can grow in either direction;
   elements occupy queue[rear..front-1] */
int rear = MAX/2, front = MAX/2;
void display();
void insertEnd(int);
void insertStart(int);

int main()
{
char ch;
int choice, c, token;
printf("Enter your choice for which deque operation you want to perform \n");
do
{
printf("\n1.Input-restricted deque \n");
printf("2.Output-restricted deque \n");
printf("\nEnter your choice for the operation : ");
scanf("%d",&c);
switch(c)
{
case 1:
printf("\nDo operation in Input-Restricted deque\n");
printf("1.Insert");
printf("\n2.Delete from end");
printf("\n3.Delete from beginning");
printf("\n4.Show or display");
do
{
printf("\nEnter your choice for the operation in deque: ");
scanf("%d",&choice);
switch(choice)
{
case 1: insertEnd(token);
display();
break;
case 2: token=delEnd();
printf("\nThe token deleted is %d",token);
display();
break;
case 3: token=delStart();
printf("\nThe token deleted is %d",token);
display();
break;
case 4: display();
break;
default: printf("Wrong choice");
break;
}
printf("\nDo you want to continue(y/n) to do operation in input-restricted deque: ");
ch=getch();
}
while(ch=='y'||ch=='Y');
break;

case 2:
printf("\nDo operation in Output-Restricted deque\n");
printf("1.Insert at the end");
printf("\n2.Insert at the beginning");
printf("\n3.Delete the element");
printf("\n4.Show or display");
do
{
printf("\nEnter your choice for the operation: ");
scanf("%d",&choice);
switch(choice)
{
case 1: insertEnd(token);
display();
break;
case 2: insertStart(token);
display();
break;
case 3: token=delStart();
printf("\nThe token deleted is %d",token);
display();
break;
case 4: display();
break;
default: printf("Wrong choice");
break;
}
printf("\nDo you want to continue(y/n):");
ch=getch();
}
while(ch=='y'||ch=='Y');
break;
}

printf("\nDo you want to continue(y/n):");
ch=getch();
}
while(ch=='y'||ch=='Y');
return 0;
}

void display()
{
int i;
printf("\nThe queue elements are:");
for(i=rear;i<front;i++)
{
printf("%d  ",queue[i]);
}
}

void insertEnd(int token)
{
char a;
if(front==MAX)
{
printf("\nQueue full\nyou cant enter more elements at the end of queue");
return;
}
do
{
printf("\nEnter the token to be inserted:");
scanf("%d",&token);
queue[front]=token;
front=front+1;
printf("do you want to continue insertion Y/N");
a=getch();
}
while((a=='y'||a=='Y')&&front<MAX);
}

void insertStart(int token)
{
char a;
if(rear==0)
{
printf("\nQueue full\nyou cant enter more elements at the start of queue");
return;
}
do
{
printf("\nEnter the token to be inserted:");
scanf("%d",&token);
rear=rear-1;
queue[rear]=token;
printf("do you want to continue insertion Y/N");
a=getch();
}
while((a=='y'||a=='Y')&&rear>0);
}

int delEnd()
{
int t;
if(front==rear)
{
printf("\nQueue empty");
return 0;
}
front=front-1;
t=queue[front];
return t;
}

int delStart()
{
int t;
if(front==rear)
{
printf("\nQueue empty");
return 0;
}
rear=rear+1;
t=queue[rear-1];
return t;
}

4.) Dequeue or Double Ended Queue using Circular Array

This source code differs from the previous one: this time the code uses a circular array to implement a Dequeue or Double Ended Queue. The source code is below as per the references.

#include <stdio.h>
#include <stdlib.h>

#define MAX 30

typedef struct dequeue
{
int data[MAX];
int rear,front;
}dequeue;

void initialize(dequeue *p);
int empty(dequeue *p);
int full(dequeue *p);
void enqueueR(dequeue *p,int x);
void enqueueF(dequeue *p,int x);
int dequeueF(dequeue *p);
int dequeueR(dequeue *p);
void print(dequeue *p);

void main()
{
int i,x,op,n;
dequeue q;
initialize(&q);

do
{
printf("\n1.Create\n2.Insert(rear)\n3.Insert(front)\n4.Delete(rear)\n5.Delete(front)");
printf("\n6.Print\n7.Exit\n\nEnter your choice:");
scanf("%d",&op);

switch(op)
{
case 1: printf("\nEnter number of elements:");
scanf("%d",&n);
initialize(&q);
printf("\nEnter the data:");

for(i=0;i<n;i++)
{
scanf("%d",&x);
if(full(&q))
{
printf("\nQueue is full!!");
exit(0);
}
enqueueR(&q,x);
}
break;

case 2: printf("\nEnter element to be inserted:");
scanf("%d",&x);
if(full(&q))
{
printf("\nQueue is full!!");
exit(0);
}
enqueueR(&q,x);
break;

case 3: printf("\nEnter the element to be inserted:");
scanf("%d",&x);
if(full(&q))
{
printf("\nQueue is full!!");
exit(0);
}
enqueueF(&q,x);
break;

case 4: if(empty(&q))
{
printf("\nQueue is empty!!");
exit(0);
}
x=dequeueR(&q);
printf("\nElement deleted is %d\n",x);
break;

case 5: if(empty(&q))
{
printf("\nQueue is empty!!");
exit(0);
}
x=dequeueF(&q);
printf("\nElement deleted is %d\n",x);
break;

case 6: print(&q);
break;

default: break;
}
}while(op!=7);
}

void initialize(dequeue *P)
{
P->rear=-1;
P->front=-1;
}

int empty(dequeue *P)
{
if(P->rear==-1)
return(1);
return(0);
}

int full(dequeue *P)
{
if((P->rear+1)%MAX==P->front)
return(1);
return(0);
}

void enqueueR(dequeue *P,int x)
{
if(empty(P))
{
P->rear=0;
P->front=0;
P->data[0]=x;
}
else
{
P->rear=(P->rear+1)%MAX;
P->data[P->rear]=x;
}
}

void enqueueF(dequeue *P,int x)
{
if(empty(P))
{
P->rear=0;
P->front=0;
P->data[0]=x;
}
else
{
P->front=(P->front-1+MAX)%MAX;
P->data[P->front]=x;
}
}

int dequeueF(dequeue *P)
{
int x;
x=P->data[P->front];
if(P->rear==P->front)   //delete the last element
initialize(P);
else
P->front=(P->front+1)%MAX;
return(x);
}

int dequeueR(dequeue *P)
{
int x;
x=P->data[P->rear];
if(P->rear==P->front)
initialize(P);
else
P->rear=(P->rear-1+MAX)%MAX;
return(x);
}

void print(dequeue *P)
{
if(empty(P))
{
printf("\nQueue is empty!!");
exit(0);
}

int i;
i=P->front;
while(i!=P->rear)
{
printf("n%d",P->data[i]);
i=(i+1)%MAX;
}
printf("n%dn",P->data[P->rear]);
}

5.) Priority Queue using Array

I noticed I hadn’t posted a priority queue implementation using arrays. This is the reason the source code is here, as a ready reference for the readers of the blog. Here is the original version of the source code, tested under the terms and conditions posted before, which should run via a Turbo C compiler. Other compilers may not work because the syllabus itself is outdated, and people (and universities) must realize this fast.

#include <stdio.h>
#include <stdlib.h>
#define size 5

int queue[5][2] = {0};
int top = -1;

void push(int value, int pr)
{
int i, j;
if (top < size - 1)
{
if (top >= 0 && queue[top][1] > pr)
{
for (i = 0; i < top; i++)
{
if (queue[i][1] > pr)
{
break;
}
}
for (j = top; j >= i; j--)
{
queue[j + 1][0] = queue[j][0];
queue[j + 1][1] = queue[j][1];
}
top++;
queue[i][0] = value;
queue[i][1] = pr;
}
else
{
top++;
queue[top][0] = value;
queue[top][1] = pr;
}
}
else
{
printf("queue overflow \n");
}
}

void pop()
{
int i;
if (queue[0][0] == 0)
{
printf("\n The queue is empty  \n");
}
else
{
printf("After dequeue the following value is erased \n  %d \n", queue[0][0]);
for (i = 0; i < top; i++)
{
queue[i][0] = queue[i + 1][0];
queue[i][1] = queue[i + 1][1];
}
queue[top][0] = 0;
queue[top][1] = 0;
top--;
}
}

void display()
{
int i, j;
printf("Element\tPriority \n");
for (i = size - 1; i >= 0; i--)
{
for (j = 0; j < 2; j++)
{
printf(" %d\t", queue[i][j]);
}
printf("\n");
}
}

int main()
{
int ch = 0, value = 0, pr = 0;
while (1)
{
printf("\n Please Enter the choice. \n");
printf("1 for Enqueue \n 2 for Dequeue \n 3 for display\n  5 for exit: \t \n");
scanf("%d", &ch);
switch (ch)
{
case 1:
printf("\n Please Enter the number to be inserted: \t ");
scanf("%d", &value);
printf("\n Please Enter the priority: \t ");
scanf("%d", &pr);
push(value, pr);
break;
case 2:
pop();
break;
case 3:
display();
break;
case 5:
exit(0);
default:
printf("You entered wrong choice!");
}
}
}

6.) Circular Queue in C Data-structures with Notes

In a standard queue data structure, a re-buffering problem occurs on each dequeue operation. This is solved by joining the front and rear ends of the queue, making the queue a circular queue.
A circular queue is a linear data structure. It follows the FIFO principle.

  • In a circular queue the last node is connected back to the first node to make a circle.
  • A circular queue follows the First In First Out principle.
  • Elements are added at the rear end and deleted at the front end of the queue.
  • Both the front and the rear pointers point to the beginning of the array.
  • It is also called a “Ring buffer”.
  • Items can be inserted into and deleted from the queue in O(1) time.

Circular Queue can be created in three ways. They are:

  • Using single linked list
  • Using double linked list
  • Using arrays

Following is the source code for implementing a Circular Queue in C:

#include <stdio.h>
#include <stdlib.h>
#include <conio.h>
#define max 5
int q[max];
int rear = -1, front = -1;
void insert();
void delet();
void display();

void main()
{
char ch;
int choice;
do
{
clrscr();
printf("\n1 -> Insert");
printf("\n2 -> Delete");
printf("\n3 -> Display");
printf("\n0 -> Exit");

printf("\n\nEnter Your Choice -> ");
scanf("%d", &choice);
switch (choice)
{
case 1:
insert();
break;
case 2:
delet();
break;
case 3:
display();
break;
case 0:
exit(0);
default:
printf("\n\nInvalid Choice");
}
printf("\n\nDo u Want to Continue -> ");
ch = getche();
} while (ch == 'y' || ch == 'Y');
}

void insert()
{
int data;
if ((rear == max - 1 && front == 0) || rear == front - 1)
{
printf("\nSorry! Q is Full");
}
else
{
printf("\nEnter data You want to insert ->");
scanf("%d", &data);
if (front == -1)
{
front++;
rear++;
}
else if (rear == max - 1)
{
rear = 0;
}
else
{
rear++;
}
q[rear] = data;
printf("\n\nData inserted successfully");
}
}
void delet()
{
if (front == -1)
{
printf("\n\nSorry! Q is Empty");
}
else
{
if (front == rear)
{
front = rear = -1;
}
else if (front == max - 1)
{
front = 0;
}
else
{
front++;
}
printf("\nElement deleted Successfully");
}
}

void display()
{
int i;
if (front == -1)
{
printf("\n\nSorry! Q is Empty");
}
else
{
printf("\n\n:: Queue Elements are ::\n");
if (front <= rear)
{
for (i = front; i <= rear; i++)
{
printf("\n%d", q[i]);
}
}
else
{
for (i = front; i < max; i++)
{
printf("\n%d", q[i]);
}
for (i = 0; i <= rear; i++)
{
printf("\n%d", q[i]);
}
}
}
}

7.) Circular Queue Implementation in C using Arrays

Earlier in the post, we talked about implementing a circular queue. This section presents another implementation using arrays. The source code below could be used to demonstrate the concepts:

#include <stdio.h>
#include <stdlib.h>
#include <conio.h>
#define MAXSIZE 5
int cq[MAXSIZE];
int front,rear;
void main()
{
void add(int,int);
void del(int);
int will=1,i,num;
front = -1;
rear = -1;
clrscr();
printf("\nProgram for Circular Queue demonstration through array");
while(1)
{
printf("\n\nMAIN MENU\n1.INSERTION\n2.DELETION\n3.EXIT");
printf("\n\nENTER YOUR CHOICE : ");
scanf("%d",&will);
switch(will)
{
case 1:
printf("\n\nENTER THE QUEUE ELEMENT : ");
scanf("%d",&num);
add(num,MAXSIZE);
break;
case 2:
del(MAXSIZE);
break;
case 3:
exit(0);
default: printf("\n\nInvalid Choice . ");
}

} //end of  outer while
}               //end of main
void add(int item,int MAX)
{
//rear++;
//rear= (rear%MAX);
if(front ==(rear+1)%MAX)
{
printf("\n\nCIRCULAR QUEUE IS OVERFLOW");
}
else
{
if(front==-1)
front=rear=0;
else
rear=(rear+1)%MAX;
cq[rear]=item;
printf("\n\nRear = %d    Front = %d ",rear,front);
}
}
void del(int MAX)
{
int a;
if(front == -1)
{
printf("\n\nCIRCULAR QUEUE IS UNDERFLOW");
}
else
{
a=cq[front];
if(front==rear)
front=rear=-1;
else
front = (front+1)%MAX;
printf("\n\nDELETED ELEMENT FROM QUEUE IS : %d ",a);
printf("\n\nRear = %d    Front = %d ",rear,front);

}
}

Here is a similar implementation of a queue using an array. The source code can be referenced below:

#include <stdio.h>
#include <conio.h>
#define N 6

int queue[N]={0};
int rear=0,front=0;

void insert(void);
void del(void);
void disp(void);
void cre(void);

void main()
{
int user=0;

clrscr();
while(user!=4)
{
clrscr();
printf("\n\n\n\t\t\t THE SIZE OF QUEUE IS %d",N);
printf("\n\t 1.INSERT");
printf("\n\t 2.DELETE");
printf("\n\t 3.DISPLAY");
printf("\n\t 4.EXIT");
printf("\n\t 5.CREATE");
scanf("%d",&user);
switch(user)
{
case 1:
insert();
break;
case 2:
del();
break;
case 3:
disp();
break;
case 4:
printf("\n\t THANK U");
break;
case 5:
cre();
break;
}
getch();

}
getch();
}

void insert(void)
{
int t;
if(rear<N)
{
printf("\n\t ENTER A VALUE IN QUEUE");
scanf("%d",&t);
queue[rear]=t;
rear++;
}
else
{
printf("\n\t Q OVERFLOW!!!!!!!!!!!!!!!");
}
}
void del(void)
{
if(front==rear)
{
printf("\n\t Q IS EMPTY");
return;
}
printf("\n\t %d gets deleted.........",queue[front]);
queue[front]=0;
front++;
}
void disp(void)
{
int i;
for(i=front;i<rear;i++)
{
printf("\n\t %d",queue[i]);
}
}
void cre(void)
{

int t;
printf("\n\t ENTER A VALUE IN QUEUE");
scanf("%d",&t);
front=0;
queue[front]=t;
rear=front+1;
}

8.) Stack Implementation to perform PUSH and POP Operations

After having gone through the queue utilities in C, including the circular queue implementations, here is the source code implementing a stack with PUSH and POP operations in C.

#include <stdio.h>
#define MAXSIZE 5

struct stack
{
int stk[MAXSIZE];
int top;
};
typedef struct stack STACK;
STACK s;

void push(void);
int  pop(void);
void display(void);

void main ()
{
int choice;
int option = 1;
s.top = -1;

printf ("STACK OPERATION\n");
while (option)
{
printf ("------------------------------------------\n");
printf ("      1    -->    PUSH               \n");
printf ("      2    -->    POP               \n");
printf ("      3    -->    DISPLAY               \n");
printf ("      4    -->    EXIT           \n");
printf ("------------------------------------------\n");

printf ("Enter your choice\n");
scanf    ("%d", &choice);
switch (choice)
{
case 1:
push();
break;
case 2:
pop();
break;
case 3:
display();
break;
case 4:
return;
}
fflush (stdin);
printf ("Do you want to continue(Type 0 or 1)?\n");
scanf    ("%d", &option);
}
}
/*  Function to add an element to the stack */
void push ()
{
int num;
if (s.top == (MAXSIZE - 1))
{
printf ("Stack is Full\n");
return;
}
else
{
printf ("Enter the element to be pushed\n");
scanf ("%d", &num);
s.top = s.top + 1;
s.stk[s.top] = num;
}
return;
}
/*  Function to delete an element from the stack */
int pop ()
{
int num;
if (s.top == - 1)
{
printf ("Stack is Empty\n");
return (s.top);
}
else
{
num = s.stk[s.top];
printf ("popped element is = %d\n", s.stk[s.top]);
s.top = s.top - 1;
}
return(num);
}
/*  Function to display the status of the stack */
void display ()
{
int i;
if (s.top == -1)
{
printf ("Stack is empty\n");
return;
}
else
{
printf ("\n The status of the stack is \n");
for (i = s.top; i >= 0; i--)
{
printf ("%dn", s.stk[i]);
}
}
printf ("\n");
}

Here is yet another version of a stack implementation, this one reversing the stack contents using a recursive method:

#include <stdio.h>
#include <stdlib.h>

struct node
{
int a;
struct node *next;
};

void generate(struct node **);
void display(struct node *);
void stack_reverse(struct node **, struct node **);
void delete(struct node **);

int main()
{
struct node *head = NULL;
generate(&head);
printf("\nThe sequence of contents in stack\n");
display(head);
printf("\nInversing the contents of the stack\n");
if (head != NULL)
{
stack_reverse(&head, &(head->next));
}
printf("\nThe contents in stack after reversal\n");
display(head);
delete(&head);
return 0;
}

void stack_reverse(struct node **head, struct node **head_next)
{
struct node *temp;
if (*head_next != NULL)
{
temp = (*head_next)->next;
(*head_next)->next = (*head);
*head = *head_next;
*head_next = temp;
stack_reverse(head, head_next);
}
}

void display(struct node *head)
{
if (head != NULL)
{
printf("%d  ", head->a);
display(head->next);
}
}

void generate(struct node **head)
{
int num, i;
struct node *temp;
printf("Enter length of list: ");
scanf("%d", &num);
for (i = num; i > 0; i--)
{
temp = (struct node *)malloc(sizeof(struct node));
temp->a = i;
if (*head == NULL)
{
*head = temp;
(*head)->next = NULL;
}
else
{
temp->next = *head;
*head = temp;
}
}
}

void delete(struct node **head)
{
struct node *temp;
while (*head != NULL)
{
temp = *head;
*head = (*head)->next;
free(temp);
}
}

9.) Linear Search Implementation in C

This is the source code of a linear search utility in C. It could be used with the Turbo C compiler; other compilers were specifically not tested and are not recommended.

#include <stdio.h>

int main()
{
int array[100], search, c, n;

printf("Enter the number of elements in array\n");
scanf("%d",&n);

printf("Enter %d integer(s)\n", n);

for (c = 0; c < n; c++)
scanf("%d", &array[c]);

printf("Enter the number to search\n");
scanf("%d", &search);

for (c = 0; c < n; c++)
{
if (array[c] == search)     /* if required element found */
{
printf("%d is present at location %d.\n", search, c+1);
break;
}
}
if (c == n)
printf("%d is not present in array.\n", search);

return 0;
}

10.) Binary Search using Arrays

The source code below is an implementation of the binary search algorithm in C using arrays.

#include <stdio.h>
int main(){

int a[10],i,n,m,c=0,l,u,mid;

printf("Enter the size of an array: ");
scanf("%d",&n);

printf("Enter the elements in ascending order: ");
for(i=0;i<n;i++){
scanf("%d",&a[i]);
}

printf("Enter the number to be searched: ");
scanf("%d",&m);

l=0,u=n-1;
while(l<=u){
mid=(l+u)/2;
if(m==a[mid]){
c=1;
break;
}
else if(m<a[mid]){
u=mid-1;
}
else
l=mid+1;
}
if(c==0)
printf("The number is not found.");
else
printf("The number is found.");

return 0;
}

11.) Insertion Sort Implementation in C using Arrays

This is insertion sort in C using Arrays. The source code is available below:

#include <stdio.h>

int main()
{
int n, array[1000], c, d, t;

printf("Enter number of elements\n");
scanf("%d", &n);

printf("Enter %d integers\n", n);

for (c = 0; c < n; c++) {
scanf("%d", &array[c]);
}

for (c = 1 ; c <= n - 1; c++) {
d = c;
while ( d > 0 && array[d] < array[d-1]) {
t          = array[d];
array[d]   = array[d-1];
array[d-1] = t;

d--;
}
}

printf("Sorted list in ascending order:\n");

for (c = 0; c <= n - 1; c++) {
printf("%d\n", array[c]);
}

return 0;
}

12.) Merge Sort in C using Arrays

Here is the source code for Merge sort in C using Arrays:

#include <stdio.h>
#define MAX 50

void mergeSort(int arr[],int low,int mid,int high);
void partition(int arr[],int low,int high);

int main(){

int merge[MAX],i,n;

printf("Enter the total number of elements: ");
scanf("%d",&n);

printf("Enter the elements which to be sort: ");
for(i=0;i<n;i++){
scanf("%d",&merge[i]);
}

partition(merge,0,n-1);

printf("After merge sorting elements are: ");
for(i=0;i<n;i++){
printf("%d ",merge[i]);
}

return 0;
}

void partition(int arr[],int low,int high){

int mid;

if(low<high){
mid=(low+high)/2;
partition(arr,low,mid);
partition(arr,mid+1,high);
mergeSort(arr,low,mid,high);
}
}

void mergeSort(int arr[],int low,int mid,int high){

int i,m,k,l,temp[MAX];

l=low;
i=low;
m=mid+1;

while((l<=mid)&&(m<=high)){

if(arr[l]<=arr[m]){
temp[i]=arr[l];
l++;
}
else{
temp[i]=arr[m];
m++;
}
i++;
}
if(l>mid){
for(k=m;k<=high;k++){
temp[i]=arr[k];
i++;
}
}
else{
for(k=l;k<=mid;k++){
temp[i]=arr[k];
i++;
}
}

for(k=low;k<=high;k++){
arr[k]=temp[k];
}
}

13.) Quick Sort Implementation in C using Arrays

Below is the raw source code for Quick Sort implementation in C using Arrays:

#include <stdio.h>

void quicksort(int [10],int,int);

int main(){
int x[20],size,i;

printf("Enter size of the array: ");
scanf("%d",&size);

printf("Enter %d elements: ",size);
for(i=0;i<size;i++)
scanf("%d",&x[i]);

quicksort(x,0,size-1);

printf("Sorted elements: ");
for(i=0;i<size;i++)
printf(" %d",x[i]);

return 0;
}

void quicksort(int x[10],int first,int last){
int pivot,j,temp,i;

if(first<last){
pivot=first;
i=first;
j=last;

while(i<j){
while(x[i]<=x[pivot]&&i<last)
i++;
while(x[j]>x[pivot])
j--;
if(i<j){
temp=x[i];
x[i]=x[j];
x[j]=temp;
}
}

temp=x[pivot];
x[pivot]=x[j];
x[j]=temp;
quicksort(x,first,j-1);
quicksort(x,j+1,last);

}
}

14.) Heap Sort Implementation in C using Arrays

Below is the source code for Heap Sort in C using Arrays:

#include <stdio.h>

void main()
{
int heap[10], no, i, j, c, root, temp;

printf("\n Enter no of elements :");
scanf("%d", &no);
printf("\n Enter the nos : ");
for (i = 0; i < no; i++)
scanf("%d", &heap[i]);

for (i = 1; i < no; i++)
{
c = i;
do
{
root = (c - 1) / 2;
if (heap[root] < heap[c])   /* to create MAX heap array */
{
temp = heap[root];
heap[root] = heap[c];
heap[c] = temp;
}
c = root;
} while (c != 0);
}

printf("Heap array : ");
for (i = 0; i < no; i++)
printf("%d\t ", heap[i]);

for (j = no - 1; j >= 0; j--)
{
temp = heap[0];
heap[0] = heap[j];   /* swap max element with rightmost leaf element */
heap[j] = temp;
root = 0;
do
{
c = 2 * root + 1;   /* left node of root element */
if (c < j - 1 && heap[c] < heap[c + 1])
c++;
if (c < j && heap[root] < heap[c])   /* again rearrange to max heap array */
{
temp = heap[root];
heap[root] = heap[c];
heap[c] = temp;
}
root = c;
} while (c < j);
}

printf("\n The sorted array is : ");
for (i = 0; i < no; i++)
printf("\t %d", heap[i]);
printf("\n Complexity : \n Best case = Avg case = Worst case = O(n log n) \n");
}

15.) Bubble Sort Implementation in C using Arrays

Below is the source code for bubble sort in C using Arrays:

#include <stdio.h>

int main()
{
int array[100], n, c, d, swap;

printf("Enter number of elements\n");
scanf("%d", &n);

printf("Enter %d integers\n", n);

for (c = 0; c < n; c++)
scanf("%d", &array[c]);

for (c = 0 ; c < ( n - 1 ); c++)
{
for (d = 0 ; d < n - c - 1; d++)
{
if (array[d] > array[d+1]) /* For decreasing order use < */
{
swap       = array[d];
array[d]   = array[d+1];
array[d+1] = swap;
}
}
}

printf("Sorted list in ascending order:\n");

for ( c = 0 ; c < n ; c++ )
printf("%d\n", array[c]);

return 0;
}

5. Operating System – Round Robin CPU Scheduling and Multilevel Queue

Operating System – Round Robin Scheduling

By Shritam Bhowmick
Web Application Penetration Tester
LinkedIn: https://www.linkedin.com/profile/view?id=281014248&trk=nav_responsive_tab_profile
Academia: https://independent.academia.edu/ShritamBhowmick
Facebook: https://www.facebook.com/coded32

Round Robin Scheduling

The previous post discussed the two types of CPU Scheduling, Preemptive and Non-Preemptive, and focused on the FCFS, SJF and Priority Scheduling techniques. We had discussed how SJF and Priority Scheduling could be modified to attain preemptive scheduling and why FCFS scheduling cannot be used as preemptive scheduling. This post goes further with another preemptive type of CPU Scheduling, known as Round Robin CPU Scheduling. As we already know, Preemptive Scheduling means forcibly stopping a job from execution and keeping it in the waiting/blocked state for a period of time while another job executes. Round Robin is another instance of Preemptive Scheduling, wherein it is possible to stop a job, pick up another job from the READY QUEUE and start executing it. But there is a difference in the way Round Robin Scheduling is implemented.

In SJF (modified to SRT {Shortest Remaining Time First Scheduling}) and Priority Scheduling (modified with LONG TERM and SHORT TERM schedulers), it was possible to attain preemptive CPU Scheduling using time (CPU BURST time) and priority respectively. This, however, isn’t the case with Round Robin Scheduling, which depends neither on the CPU BURST time of the job nor on its priority: all jobs are treated equally. In Round Robin Scheduling, CPU time is divided into a number of quanta. Assume the CPU quanta duration to be 2 ms (milliseconds). Now, whenever a job is allocated to the CPU for execution, the job will be executed for a maximum of 2 ms. If the CPU BURST time required to complete the job is more than 2 ms, after 2 ms the job will be forcibly stopped and pushed back to the READY QUEUE, and a new job from the READY QUEUE will be allocated to the CPU for execution; again, this new job will run only up to 2 ms, since that is the quanta defined for the Round Robin CPU Scheduling. In case the execution requirement of a job is less than 2 ms, the job terminates normally, not forcefully. After such a termination, the job might go to the I/O WAITING STATE/BLOCKED STATE for the CPU to handle I/O operations, or might go to the HALTED STATE/TERMINATED STATE. The CPU can’t stay idle for the remaining time if a job has finished before the determined quanta; hence the efficient choice in the Round Robin implementation is to take a new job and start executing it from that point of time, saving the remainder. This means that if the quanta is 2 ms and a job finished in 1 ms, a new job from the READY QUEUE is taken by the CPU Scheduler and executed immediately, reclaiming the saved time (2 - 1 = 1 ms). For example, assume there are 4 jobs in the READY QUEUE to be executed by the CPU:

J1 = 4 ms
J2 = 2 ms
J3 = 3 ms
J4 = 6 ms

The assumption is a CPU quanta of 2 ms, that is 1 quanta = 2 ms. In Round Robin CPU Scheduling the jobs are taken on an FCFS basis, but only a limited quanta is available to each job per execution cycle, in this case 2 ms (1 quanta). The timeline in which these jobs will be served is as follows:

J1    J2    J3    J4

|—–|—–|—–|—–|
2ms 2ms 2ms 2ms

Hence, on the first execution cycle:

J1 = 2 ms completed, remaining: 2 ms of CPU BURST time.
J2 = 2 ms completed, remaining: 0 ms of CPU BURST time.
J3 = 2 ms completed, remaining: 1 ms of CPU BURST time.
J4 = 2 ms completed, remaining: 4 ms of CPU BURST time.

Since the Round Robin CPU Scheduler takes jobs on an FCFS (First Come First Served) basis, no priority or minimum CPU BURST time requirement influences which job is assigned first. The allocation is done purely first come, first served, and each execution cycle gives each job 1 quanta of time, which is 2 ms in this case. After 1 quanta is spent on a particular job, the job, if CPU BURST time remains, goes back to the READY QUEUE and the next job is taken. If this next job requires processing time exactly equal to the 1 quanta time, the job can either be pushed to the I/O WAITING STATE to cover an I/O operation or reach the HALTED STATE/TERMINATED STATE, which is normal job termination. If the job finishes before the 1 quanta time, the CPU Scheduler, without wasting any time, pushes the next job from the READY QUEUE to the ACTIVE STATE and executes it up to the 1 quanta time period assigned similarly to other jobs. When jobs are pushed to the READY QUEUE, they are always pushed to the back of the READY QUEUE, since jobs which arrived first should be the first taken by the CPU. This means the CPU takes jobs from the READY QUEUE head, which again means it follows all the rules of the FCFS basis. It should also be noted that jobs can be pushed to the READY QUEUE from the I/O WAIT STATE, the NEW STATE, or the ACTIVE STATE (in case of forcible termination). Here, we need to look at the process state diagram to verify that jobs can reach the READY QUEUE in these three ways:

Process State Diagram

The process state diagram suggests that the READY QUEUE (the READY STATE as per the diagram) indeed gets jobs from all three directions: from the active state when forcible termination of a job is done by the CPU, from the new state when newly arrived jobs are scheduled to be transferred to the ready state, and from the wait or blocked state when jobs finish awaiting I/O operations. Hence, after the first execution cycle, the timeline for the above example would look like this:

J1    J2    J3    J4   J1   J3    J4    J4

|—–|—–|—–|—–|—–|—|——|——|
2ms 2ms 2ms 2ms 2ms 1ms 2ms 2ms (assuming no new jobs have arrived at the READY QUEUE)

Now, after this timeline, all the jobs will be completed, assuming no new job arrivals reached the READY QUEUE from the NEW STATE. Hence all the jobs are treated equally in Round Robin CPU Scheduling. There must be a timer in the system to remind the CPU that 1 quanta is over for a job, and this brings the topic to a new curve. The timer we are talking about here could be programmable as well, since in some installations an administrator might want a small quanta, and in other installations the same administrator might want a higher quanta. So this timer should be programmable, to interrupt the processor reminding the CPU that 1 quanta has passed and the job must be stopped.

Timer (Programmable)

Whenever the CPU Scheduler decides that a new job has to be allocated to the CPU, or the CPU allocates a new job for execution, the timer has to be reset immediately. At the end of the time period, the timer will generate an interrupt, upon which the CPU will go to a system program whose responsibility is to terminate the currently executing job, get a new job from the READY QUEUE, give it to the CPU for execution and restart the timer. That is what happens when the required CPU BURST time is more than the quanta set in the timer. If the CPU BURST time is less than the quanta described in the timer, the process completes its execution before the timer gives an interrupt, but the scheduling cycle as a whole is not yet complete. So the last statement of every CPU BURST must be a system call: either a system call performing an I/O operation, or a system call indicating that the process is complete. Following this system call, the CPU Scheduler again has to fetch a new process (job) from the READY QUEUE, give it to the CPU for execution, and at the same time reset the timer so that the next time quanta starts from that point.

Variations of Round Robin Scheduling

There could be different variations of the Round Robin Scheduling. So far there were no considerations about the nature of the jobs, but jobs can require more CPU BURST time duration depending on whether most of the time goes to I/O operations (I/O Bound Jobs) or to CPU computation (CPU Bound Jobs). The variation depends on these CPU BURST time duration requirements! The main algorithm used will still be Round Robin Scheduling.

To implement these concepts, there is a need to know the MULTI-LEVEL QUEUE.

MULTI-LEVEL QUEUE

There could be multiple READY QUEUEs, as the following demonstrates:

Multi-Level Queue

The diagram suggests that there could be multiple READY QUEUEs taken into consideration depending upon the jobs, since jobs vary in whether I/O operations or CPU time consumes most of the CPU BURST duration. This way, if efficiency has to be maintained such that I/O operations are given the highest priority, the single READY QUEUE would (for example) be divided into 3 READY QUEUEs. They could be:

  1. Q1
  2. Q2
  3. Q3

Here is how the jobs would be distributed as per the priority of execution (the Round Robin Scheduling implementation does not use priority; here we are only prioritizing the jobs across the READY QUEUEs, not in the scheduling algorithm).

  1. Q1 – Handles the I/O Bound Jobs, which require more I/O time in their CPU BURST duration.
  2. Q2 – Handles jobs with moderate CPU time requirements.
  3. Q3 – Handles the CPU Bound Jobs, which require more CPU time duration.

Now the execution of such jobs are in the format as below:

  1. Q1 – any jobs which are here must be executed at first preference.
  2. Q2 – only after Q1 is empty are the jobs in Q2 taken; that is, all I/O Bound jobs have to be completed first.
  3. Q3 – only after both Q1 and Q2 are empty are the pending low priority, CPU Bound jobs taken.

There is a disadvantage here. Jobs can be dynamic in nature: a job that starts out as an I/O Bound Job may later require mostly CPU time (it converts itself into a CPU Bound Job). With the plain MULTI-LEVEL QUEUE implementation, a job placed in Q1 stays in Q1 and keeps its first-preference execution even after it has transformed into a CPU Bound Job. This is unwanted, and some technique is needed to resolve it. This is where the MULTI-LEVEL FEEDBACK QUEUE is used.

MULTILEVEL FEEDBACK QUEUE

In a multilevel feedback queue there are again multiple queues. Assume there are 3 queues as shown in the diagram below, but with a provision that jobs can be moved from one queue to another.

Diagram: a job pushed from Q1 down towards Q3

Whenever a job is pushed into the READY QUEUE, it is always placed first in queue number 1, which is Q1. The time quanta for the queues are:

  1. Q1 = 2 ms
  2. Q2 = 5 ms
  3. Q3 = 10 ms

When a job enters for the first time, assume its nature is not known, i.e. whether it is an I/O Bound Job or a CPU Bound Job (what its CPU BURST duration will be). The job is placed in queue number 1 (Q1), where jobs are allocated to the CPU using Round Robin Scheduling. Once allocated, the job executes for up to 2 ms. If one finds that the CPU BURST finishes before the 2 ms quantum expires, the job stays in Q1 and is not moved to Q2: having shown a CPU BURST of less than 2 ms on its first execution, its next CPU BURST is predicted to be less than 2 ms as well. If instead the timer fires because the CPU BURST exceeds 2 ms, the job is pushed down to the next queue, Q2, which has the larger quantum of 5 ms and again follows Round Robin Scheduling when allocating the job to the CPU. In Q2, if one finds that the job's CPU time requirement is less than 5 ms it remains in Q2; otherwise it is pushed down to the next queue, Q3, with its quantum of 10 ms. All of these queues follow the Round Robin CPU Scheduling algorithm for allocating the CPU to jobs.
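The demotion rule can be sketched as follows; a minimal sketch using the 2/5/10 ms quanta from the text, with the simplification (my assumption) that we just process each measured burst in sequence:

```python
QUANTA = [2, 5, 10]   # time quanta for Q1, Q2, Q3 (ms), as in the text

def mlfq_level(bursts):
    """Return the queue index (0 = Q1) a job settles in, demoting it
    one level each time a measured CPU burst exceeds the current
    queue's quantum (the timer fires before the burst finishes)."""
    level = 0
    for b in bursts:
        if b > QUANTA[level] and level < len(QUANTA) - 1:
            level += 1            # timer fired: push the job down a queue
    return level

print(mlfq_level([1, 1]))    # stays in Q1: bursts under 2 ms
print(mlfq_level([3]))       # demoted to Q2: burst over 2 ms
print(mlfq_level([3, 7]))    # demoted twice, lands in Q3
```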

Conclusively: the first time a job is pushed into Q1, if its CPU BURST duration is greater than the quantum threshold of that particular queue it is pushed to the next queue, otherwise it is retained where it is. The READY QUEUE has effectively been divided into levels by time quantum, and a job's level depends on how much CPU time it needs for a complete execution. The same mechanism works in reverse: just as jobs are pushed downwards, a job that comes to require more I/O Bound operation should be pushed back upwards, from Q3 to Q2 and from Q2 to Q1, according to the requirement and nature of the job (if dynamic!). This means that as long as a job remains I/O Bound it stays in Q1; if at some point it changes its nature from I/O Bound to CPU Bound, it is pushed downwards (lower preference). Similarly, low-priority jobs (CPU Bound Jobs) can be pushed upwards with increased priority when their dynamic nature changes from CPU Bound to I/O Bound. Jobs are thus kept at the queue level that matches their nature. These are the variations of the Round Robin technique.

Diagram: a job promoted upwards, from Q3 to Q2 to Q1

The diagram above shows how jobs are now flexibly placed according to the time duration they require and their dynamic nature. If a job changes from CPU Bound to I/O Bound, it moves upwards; if an I/O Bound job changes to CPU Bound, it moves downwards. I/O Bound Jobs have the highest priority since they require less CPU BURST time than CPU Bound Jobs, which require more CPU BURST time and hence get lower priority.
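The up/down movement can be condensed into a single promotion/demotion rule; a sketch, with the quanta taken from the text but the exact promotion threshold being my assumption:

```python
QUANTA = [2, 5, 10]   # quanta for Q1, Q2, Q3 (ms)

def adjust_level(level, burst):
    """Move a job one queue down if its burst exceeded the current
    quantum (CPU bound behaviour), one queue up if it finished within
    the quantum of the queue above (I/O bound behaviour)."""
    if burst > QUANTA[level]:
        return min(level + 1, len(QUANTA) - 1)   # demote
    if level > 0 and burst <= QUANTA[level - 1]:
        return level - 1                          # promote
    return level

level = 2                       # job currently in Q3
level = adjust_level(level, 1)  # short I/O-style burst: promoted to Q2
level = adjust_level(level, 1)  # short again: promoted to Q1
print(level)
```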

A feed-forward is also used. There is a cost called switch time, the extra time the CPU needs to move a job from one queue to another. A job can be switched directly to the appropriate queue without having to pass through the intermediate queues: if a job has to be transferred from Q3 to Q1, that is directly possible, at the expense of switch time and some extra CPU processing. This is demonstrated in the diagram below:

Diagram: feed-forward between queues

We covered preemptive and non-preemptive CPU Scheduling in the last post and conclude the topic here with Round Robin CPU Scheduling. The scheduling algorithms covered across the process-management posts are:

  1. FCFS (First Come First Served) – Nonpreemptive
  2. SJF (Shortest Job First) – Nonpreemptive
  3. Priority Scheduling – Nonpreemptive
  4. SRT (Shortest Remaining Time First) – Preemptive
  5. Round Robin Scheduling – Preemptive
  6. Multilevel Queue – Preemptive

But all the CPU Scheduling we have discussed is based on a single CPU. Modern operating systems do not remain satisfied with a single processor: they have a distributed-computing nature, whether as networked processors spread across many systems or as a single set-up with a multi-core processor. So far we discussed a single resource shared among multiple processes. What we will see next is a setup in which multiple resources are shared among multiple processes.

DISTRIBUTED COMPUTING

Two models fall under distributed computing:

  1. WORKSTATION MODEL
  2. PROCESSOR POOL MODEL

In the Workstation Model, every user has a full-fledged computer with its own memory, hard disk, etc., shared over (for example) a LAN; the computer remains functional even without the network. Every user has processing power.

In the Processor Pool Model, one high-end server holds multiple processors, and the users sit at terminals (for example graphics terminals) that have no processing capability of their own. Users do not have processing capability; the processing is centralized.

Naturally for these two types of models, the approach taken for CPU sharing must be different. There hence must be two different kind of allocation techniques:

  1. NON-MIGRATORY
  2. MIGRATORY

NON-MIGRATORY allocation is, to some extent, static in nature. Once a process is created, it is decided on which processor the process will execute; once decided this is fixed, and the process runs on that processor until termination.

MIGRATORY allocation is dynamic: a process can be migrated from one processor to another depending upon requirements. A process can thus be initiated on one processor and terminate on another.

The next post will cover these aspects of CPU process sharing, since this post was dedicated to Round Robin Scheduling and the Multilevel Queue. Single-CPU management and process management have now been covered in these five posts, which can be used for ready reference:

  1. Operating System Introductory
  2. Operating System – Importance of an Operating System
  3. Operating System – CPU Management
  4. Operating System – Process State Diagram and Basic CPU Scheduling
  5. Operating System – Round Robin CPU Scheduling and Multilevel Queue

Use the links to keep updated on the Process Management of Operating Systems; concurrent processing will be covered next. Thank you and I bid good-bye to the followers of the blog. Roger out.

4. Operating System – Process State Diagram and CPU Scheduling Basics

Operating System – Process State Diagram and CPU Scheduling Basics.

By Shritam Bhowmick
Web Application Penetration Tester
LinkedIn: https://www.linkedin.com/profile/view?id=281014248&trk=nav_responsive_tab_profile
Academia: https://independent.academia.edu/ShritamBhowmick
Facebook: https://www.facebook.com/coded32

The previous post discussed CPU scheduling and gave an introduction to Process Management; this post takes it further into process management as required for WBUT 3rd Semester BCA candidates. In particular, this post is dedicated to the process state diagram and covers that topic in full.

To start with the details: a program, when it needs to be executed, goes through a process. This process undergoes several state changes over the entire operation until the program terminates. Upon successful termination, the program delivers useful results to the user. The state changes in this progression are listed below in steps:

  1. The process enters a state called NEW STATE.
  2. The process then enters the READY STATE.
  3. The process then goes to an ACTIVE STATE/RUNNING STATE (Execution of the program starts here).
  4. The process ends with the HALT STATE/TERMINATED STATE (after all the program's BURSTs are over; the program may be terminated forcibly or terminate normally).

The following PROCESS STATE DIAGRAM would show the entire operation:

Diagram of process state

Note that there is an intermediary state known as the WAITING STATE/BLOCKED STATE. A process enters this state while it interacts with the I/O devices during I/O operations (this is called an I/O BURST). Since CPU time would otherwise be wasted during an I/O BURST, the CPU Scheduler brings up a pending job from the queue; the new job is executed (entering first its READY STATE), and once the I/O operation is finished the original job is picked up and resumed mid-way. This way the CPU saves a significant amount of time and maintains efficiency. The following describes the process timeline:

 

Diagram: program termination and I/O operation timeline

 

CPU SCHEDULING

From the process timeline it can be observed that a program's progression begins with a CPU BURST and terminates with a CPU BURST as well. In between, the CPU has to wait on I/O devices, and pending jobs are completed during this time. The original job is held in the WAITING STATE/BLOCKED STATE, and upon completion of its I/O the original job is taken up again. In the previous post we discussed two types of CPU Scheduling:

  • FCFS (First Come First Serve) CPU Scheduling.
  • SJF (Shortest Job First) CPU Scheduling.

SJF CPU Scheduling saves time and is an efficient way to schedule jobs; the scheduling is done by the CPU Scheduler. Contrary to SJF, FCFS CPU Scheduling cannot save time, and no prediction of the next CPU BURST is made (to maintain efficiency we need to estimate how much time the next CPU BURST is going to take). FCFS follows a strict rule: take a job, process and execute it, and only after the termination of that job can the next job be taken by the CPU. Hence no real scheduling for time efficiency is done with FCFS; the job which comes first is executed first. Prediction of the next CPU BURST applies instead to the SJF scheduling algorithm: since SJF takes time efficiency into consideration, the CPU must predict the amount of time it will spend on the NEXT CPU BURST.
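The efficiency gain of SJF over FCFS can be made concrete by comparing average waiting times for a hypothetical batch of CPU bursts; a minimal sketch:

```python
def avg_waiting(bursts):
    """Average waiting time when jobs run back-to-back in the given
    order (each job waits for the sum of all bursts before it)."""
    wait, elapsed = 0, 0
    for b in bursts:
        wait += elapsed
        elapsed += b
    return wait / len(bursts)

jobs = [6, 8, 7, 3]                 # hypothetical CPU burst times
fcfs = avg_waiting(jobs)            # FCFS: arrival order as given
sjf = avg_waiting(sorted(jobs))     # SJF: shortest burst first
print(fcfs, sjf)
```

Running the shortest bursts first strictly lowers the average wait, which is exactly why SJF is the more efficient policy.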

CPU BURST: The amount of time spent by the CPU for a program in order to execute the program, process it for the execution.
I/O BURST: The amount of time spent by the CPU for a program interacting with I/O devices for I/O operations.

Now, since a calculated prediction of the NEXT CPU BURST is needed for SJF CPU Scheduling, the following is the standard way to estimate the duration of the NEXT CPU BURST yet to occur for a process:

τ(n+1) = α·t(n) + (1 − α)·τ(n)

where t(n) is the measured length of the n-th CPU BURST, τ(n) is the previous prediction, and α (0 ≤ α ≤ 1) weights recent history against the past.
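The prediction is the standard exponential average, τ(n+1) = α·t(n) + (1 − α)·τ(n); a minimal sketch with hypothetical values:

```python
def predict_next_burst(tau, t, alpha=0.5):
    """Exponential average: tau_next = alpha*t + (1 - alpha)*tau,
    where t is the burst just measured and tau the old prediction."""
    return alpha * t + (1 - alpha) * tau

tau = 10.0                  # initial guess for the first burst
for t in [6, 4, 6]:         # measured CPU bursts
    tau = predict_next_burst(tau, t)
print(tau)
```

With α = 0.5, each new measurement counts as much as the whole prediction history before it; a larger α makes the predictor react faster to a job changing its nature.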

 

Now, there is yet another special case of the SJF algorithm which can be used for efficiency purposes, known as PRIORITY SCHEDULING. In priority scheduling, priority levels can be set; the job with the highest priority is executed first, and low-priority jobs are executed only after the higher-priority jobs have run. Plain SJF can be viewed as priority scheduling in which the priority is the inverse of the predicted NEXT CPU BURST.

NON-PREEMPTIVE CPU SCHEDULING

Now, all of these scheduling algorithms, that is:

  1. FCFS
  2. SJF
  3. Priority

are known as NON-PREEMPTIVE SCHEDULING. They are non-preemptive because once a job is allocated, the CPU cannot be taken away from it until that entire CPU BURST is complete. Only at the end of the current CPU BURST can another job be assigned to the CPU by the CPU Scheduler.

PREEMPTIVE CPU SCHEDULING

In PREEMPTIVE CPU SCHEDULING, a job in the middle of its CPU BURST can be preempted by allocating another job (the current job is stopped before its natural completion). Of the three scheduling algorithms seen so far, FCFS cannot be made preemptive, because FCFS follows the strict rule that the job which arrives first must be completed (executed) first, and only after its normal completion can other pending jobs be executed. The remaining two, SJF and Priority Scheduling, can be modified into PREEMPTIVE CPU SCHEDULING.

Let’s take an example of jobs in the READY QUEUE, with SJF CPU Scheduling modified into its preemptive form, SHORTEST REMAINING TIME FIRST CPU SCHEDULING (SRT):

J1 – 15
J2 – 9
J3 – 3
J4 – 5

These jobs are in the ready queue with their CPU BURST times. As per SJF CPU SCHEDULING, the job with the shortest execution time is picked up first, in this case J3, since J3 requires 3 units of time while the others require longer. Suppose execution has started and job J3 has spent 1 unit of time processing on the CPU:

J3

|———–|
1 UNIT

There are 2 UNITS of time left for job J3 to complete, but another job, J5, arrives requiring a CPU BURST of 1 unit, so the READY QUEUE now holds:

J1 – 15
J2 – 9
J3 – 2 (Since 1 unit has already been completed, the left over is 2 time units.)
J4 – 5
J5 – 1 (new job which has arrived with time unit 1)

Now the CPU Scheduler checks the ready queue to find which job has the minimum time requirement, as per the policy of the SJF CPU SCHEDULING algorithm. It finds that the newly arrived job requires only 1 time unit, so it preempts job J3 in the middle of its execution and allocates job J5 to the CPU for processing:

J3              J5               2 UNIT LEFT

|———–|———–|______________|
1 UNIT     1 UNIT

So job J5 is executed, having been scheduled in the middle of job J3's execution. After completion of J5, the CPU Scheduler again checks the READY QUEUE, where the CPU BURST requirements are:

J1 – 15
J2 – 9
J3 – 2 (This is the minimum time unit)
J4 – 5
J5 – 0 (Since it is finished)

Assuming no new job arrived in the READY QUEUE with a lower time-unit requirement than job J3's remaining time, the CPU Scheduler assigns the CPU to execute J3, which requires 2 more time units:

J3              J5                        J3

|———–|———–|———————-|
1 UNIT     1 UNIT              2 UNIT

After the completion of the job J3, the CPU Scheduler yet again has to check the READY QUEUE:

J1 – 15
J2 – 9
J3 – 0
J4 – 5
J5 – 0

Now the modified SJF algorithm treats job J4 as the minimum and starts executing it, assuming no newer jobs arrived in the READY QUEUE. This is how the SJF CPU Scheduling algorithm, in its modified form known as SRT (SHORTEST REMAINING TIME FIRST) CPU Scheduling, maintains efficiency in CPU processing.
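The whole worked example can be checked with a small discrete-time SRT simulation; a sketch in 1-unit time steps, not production scheduler code:

```python
def srt(jobs):
    """Shortest-Remaining-Time-first simulation in 1-unit time steps.
    jobs: {name: (arrival, burst)}. Returns the job name executed in
    each time unit."""
    remaining = {}
    timeline, t = [], 0
    total = sum(b for _, b in jobs.values())
    while len(timeline) < total:
        for name, (arr, burst) in jobs.items():
            if arr == t:
                remaining[name] = burst      # job enters the READY QUEUE
        # pick the job with the least remaining time (preempts the rest)
        name = min(remaining, key=lambda n: remaining[n])
        timeline.append(name)
        remaining[name] -= 1
        if remaining[name] == 0:
            del remaining[name]
        t += 1
    return timeline

jobs = {"J1": (0, 15), "J2": (0, 9), "J3": (0, 3),
        "J4": (0, 5), "J5": (1, 1)}
print(srt(jobs)[:4])    # J3 runs first, J5 preempts it at t = 1
```

The simulated timeline reproduces the diagrams above: J3 for 1 unit, J5 for 1 unit, J3 for its remaining 2 units, then J4, J2, J1.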

There must likewise be a way to modify the Priority Scheduling algorithm for an efficient result. The same logic applies, but with 'priority' in mind: the higher-priority job must be executed first. This creates a conflict: if high-priority jobs keep arriving in the READY QUEUE, they will always be executed first, and the lower-priority jobs will starve for CPU time and may never get executed. This situation is unwanted, so a concept called 'aging' is used. While a job waits in the READY QUEUE, the CPU increments its priority at regular intervals of time. The priority is then no longer decided solely by the user or the administrator but also by how long the job has been waiting for CPU time. Eventually a pending job reaches the maximum priority level, and when it does, the CPU starts executing that job, putting the current job into the WAIT/BLOCK status.
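A minimal sketch of aging; the boost-every-tick policy, the priority cap, and the job values are my assumptions (real schedulers age on timed intervals), but the anti-starvation effect is the one described above:

```python
def run_with_aging(ready, max_priority=10):
    """Priority scheduling with aging: each tick the highest-priority
    job runs one unit, and every job left waiting gets its priority
    bumped (capped), so low-priority jobs cannot starve.
    ready: {name: [priority, burst]}. Returns the completion order."""
    order = []
    while ready:
        name = max(ready, key=lambda n: ready[n][0])
        ready[name][1] -= 1
        if ready[name][1] == 0:
            order.append(name)
            del ready[name]
        for n in ready:                 # age everyone still waiting
            if n != name:
                ready[n][0] = min(max_priority, ready[n][0] + 1)
    return order

# Low-priority B is aged past long-running high-priority A and
# finishes first instead of starving behind it.
print(run_with_aging({"A": [5, 10], "B": [1, 2]}))
```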

The time taken by the CPU Scheduler itself (which, like any program, must be executed by the CPU) should be negligible compared to the CPU BURST time of the jobs. For a brief overview of what we have discussed: in the process state diagram we said a process migrates from the READY state to the ACTIVE state, from the ACTIVE state to the WAITING state, and again from the WAITING state to the READY state until job completion. But with the new concept of PREEMPTIVE CPU Scheduling, another route of state migration appears: from the ACTIVE state directly back to the READY state. Below is what completes the Process State Diagram:

pathagain

The ‘blue’ path is the new path made available by PREEMPTIVE CPU Scheduling. With respect to the process state diagram: jobs that require a large amount of CPU BURST time are called ‘CPU Bound Jobs‘, while jobs that require more I/O time and less CPU time are called ‘I/O Bound Jobs‘. Hence the two types of jobs are:

  1. CPU Bound Jobs
  2. I/O Bound Jobs

Now assume all the jobs in the READY QUEUE are CPU Bound Jobs. This means I/O operations are of little concern for any of the jobs: very few time units are spent in I/O while much time is spent in CPU processing. In such cases the CPU usage will be quite high and the I/O devices will sit inactive. In the opposite situation, where all the jobs in the READY QUEUE are I/O Bound and CPU processing is negligible, the CPU will sit mostly idle while the I/O devices are very busy. Neither situation is desirable: efficiency lies in keeping all of a system's components busy, and in these cases either the CPU or the I/O devices remain inactive, wasting time and resources for lack of proper scheduling of the jobs in the first place. To avoid this situation, or rather to schedule the jobs efficiently, there are schedulers.

Technically, all of this means there must be some kind of management at the READY QUEUE to avoid situations wherein resource time is wasted. To address the problem there are two kinds of schedulers: one takes a job from the NEW STATE to the READY STATE (moving the job into main memory, into what is dubbed the READY QUEUE; the state diagram describes it as the READY STATE), and the other takes a job from the READY STATE to the ACTIVE STATE. The latter decision must be made within the very short time span of a CPU BURST, and it now involves two routes (READY to ACTIVE, and back from ACTIVE to READY along the blue path introduced by PREEMPTIVE CPU Scheduling; see the diagram above), so this scheduler cannot afford complex decisions. The scheduler that takes a job from the READY STATE (READY QUEUE) to the ACTIVE STATE is known as the SHORT TERM SCHEDULER; the one that takes a job from the NEW STATE to the READY STATE (READY QUEUE) is called the LONG TERM SCHEDULER. The two kinds of schedulers are therefore:

  1. LONG TERM SCHEDULER
  2. SHORT TERM SCHEDULER

Since, as discussed, it is not desirable for the READY QUEUE to hold only CPU Bound Jobs or only I/O Bound Jobs, the LONG TERM SCHEDULER is responsible for deciding which jobs get sent to the READY STATE (READY QUEUE). The LONG TERM SCHEDULER has plenty of time to decide, while the SHORT TERM SCHEDULER has very little, which is why we can afford a complex LONG TERM SCHEDULER but need a fast SHORT TERM SCHEDULER.

 

Analysis of 3rd Semester Operating System – Introduction and System Structure

Introduction and System Structure – Questions and Analysis – Operating System, 3rd Semester WBUT BCA

 

By Shritam Bhowmick
Web Application Penetration Tester
LinkedIn: https://www.linkedin.com/profile/view?id=281014248&trk=nav_responsive_tab_profile
Academia: https://independent.academia.edu/ShritamBhowmick
Facebook: https://www.facebook.com/coded32

In the last post I went ahead and introduced process management, but I forgot to cover the first module. This post covers that module and also answers the questions related to the specific section Operating Systems – Introduction and System Structure. The post is about the introductory part and is devoted to answering analytically deduced questions, the ones asked in the years before 2014. Since no one in WBUT BCA has done a good job at sharing, I will go ahead with my take at this.

Following are the frequently asked questions, which may well appear in the 2014 WBUT BCA paper. This post may benefit not only those currently taking the examinations but also other students, researchers, or examiners writing up what they have already documented on hard copy. I believe sharing with the world should be the utmost priority, where everyone else only hopes.

The questions selected by ‘analysis’ (a prediction of which are likely to come up in 2014) are as follows:

1.) Differentiate between Logical and Physical Address Space.
2.) What is Operating System? State the importance of Operating System.
3.) Discuss the relationship of Operating System to basic computer hardware. Explain the hierarchy of the Operating System.
4.) Write a Short Note on Device Management and Virtual Machine.

Now there are these objective questions along with answers (a one liner) which I think might help with the Objective based questions:

  1. Which is not a layer of the operating system? ‘Critical Section’ is not a layer of the operating system; the others, ‘Kernel’, ‘Shell’, and ‘Application Program’, are parts of the operating system.
  2. The operating system is responsible for ‘controlling peripheral devices such as monitors, printers, disk drives, etc.‘ It also helps detect errors in user programs and provides an interface which allows users to choose programs to run and to manipulate files; pretty much all of the above. Anything narrower would be a wrong answer: when asked objectively, the answer has almost every time covered all of these.
  3. When an interrupt occurs, ‘execution of the interrupted process resumes after the interrupt has been processed‘; anything else would be a wrong attempt at answering the question.
  4. What is a Shell? – Shell is a command interpreter, anything else is wrong.
  5. Multiprogramming systems ‘execute more jobs in the same time period‘ (not each individual job faster).
  6. Multiprogramming is ‘more than one program executing on a machine‘.
  7. In system mode, the machine is executing operating system instructions. So it is in system mode that the operating system executes OS instructions. Other modes offered as options are Normal, Safe, and User, but none of them would be correct when asked objectively about executing system instructions.

With that done, we now have a basic touch on the objective part. The subjective questions given above came out of the analysis, and on that analysis I will give the answers in this post. The subjective questions predicted for 2014 have already been listed, so I will drive straight into answering them.

1.) Differentiate between logical v/s physical address space.

Answer: First off, let's look at what an address space really is. By definition, an address space is the range of memory addresses available to a computational entity such as, for example, a file, a device, a server, or a networked computer. An address space may refer to a range of addresses available to the processor or to a process, and this range of addresses may be logical or physical.

Now, to answer the second part and differentiate between logical and physical address space, we need to know what each is. Logical addresses are the addresses generated by the CPU. From the perspective of the running program, an item appears to reside at the address logically assigned by the CPU; a user program never looks at physical addresses, it always refers to the logical addresses generated by the CPU. In other words, the logical address space is the set of logical addresses generated by a program. Logical addresses must be mapped to physical addresses before they are used, and this mapping is handled by a hardware device called the Memory Management Unit (MMU). A physical address (or real address), on the other hand, is the address seen by the memory unit; it allows the data bus to access a particular memory cell in main memory. Logical addresses generated by the CPU while executing a program are mapped into physical addresses by the MMU.

The difference between logical address space and physical address space: a logical address is generated by the CPU (from the perspective of a running program), whereas a physical address (or real address) is the address seen by the memory unit; it allows the data bus to access a particular memory cell in main memory. All logical addresses are mapped into physical addresses by the MMU before they can be used. Physical and logical addresses are the same under compile-time and load-time address binding, but they differ under execution-time address binding.
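A toy base/limit sketch of the mapping described above (the class name and addresses are hypothetical; a real MMU is hardware, often page-based rather than base/limit):

```python
class MMU:
    """Toy base/limit MMU: a logical address is valid only if it is
    below the limit; the physical address is base + logical."""
    def __init__(self, base, limit):
        self.base, self.limit = base, limit

    def translate(self, logical):
        if not 0 <= logical < self.limit:
            raise MemoryError("address outside logical address space")
        return self.base + logical

# The same logical address maps to different physical cells depending
# on where the process was loaded (the base register).
mmu = MMU(base=0x4000, limit=0x1000)
print(hex(mmu.translate(0x01A)))
```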

2.) What is an Operating System? State the Importance of the Operating System.

Answer: The operating system is the low-level software, a collection of programs and utilities, that supports basic functions such as scheduling tasks and controlling peripherals. It sits between the user and the hardware, and lets the user control the machine through an interface and produce or generate output. The operating system manages I/O operations, handles interrupts, manages the file system and storage space, and additionally provides utilities which can be handy for a user to automate tasks.

Importance of the Operating System: the importance of an operating system is that it provides the user with the power to create programs and accounts, execute programs, access files in a controlled way, access additional systems, detect errors, and use I/O devices in an automated way of working. Users depend on their operating systems to automate repetitive tasks and to detect, handle, and quarantine errors without requiring the user to take care of the low-level details. It interacts with the hardware and allows hardware components to instruct one another so that a job/task progresses to execution and produces useful results as output.

3.) Discuss the Relationship of operating system to basic hardware. Explain the hierarchy structure of the operating system.

Answer: The basic computer hardware comprises the monitor, the CPU, the keyboard, memory, and other I/O and secondary devices. The operating system manages all of these resources, which is why it is also termed a ‘Resource Manager’. The efficiency with which an operating system handles these resources is remarkable, and it also handles the scheduling, i.e. which job, depending upon its priority, must be executed first by the hardware involved and which jobs are to be queued.

The structure of the Operating System is organized the following way:

oslayer

This resembles the following image, which spells out the functionality associated with each layer:

Layered

 

The application program layer faces the users of the operating system. The system program layer consists of the compiler, assembler, linker, libraries, routines, etc. The kernel directly interacts with the hardware and provides services such as hardware drivers; it comprises I/O drivers, the CPU scheduler, the pager, the swapper, etc. Altogether, the structure of the operating system manages the hardware resources in a timely and efficient manner.

4.) Write a Short Note on Device Management and Virtual Machine.

Answer: Device Management: The operating system has an important role in handling devices and managing them. The devices can be managed by the operating system via three distinct ways:

a.) Dedicated
b.) Shared
c.) Virtual

The dedicated devices are tape drives, plotters, etc.; the shared devices are printers, hard drives, etc.; and the virtual devices are virtual printers (spooling), etc. The status of channels, control units, and devices must be checked by the device management routines embedded in the operating system. Some devices are capable of doing an I/O operation without any support from a channel or the control units; however, most devices require the control unit and the support of channels.

Virtual Machine: a virtual machine is a virtualized operating system within the operating system. The virtual machine works the same way an operating system installed on a physical hard disk does, but relies on a control program called the VMM. VMM stands for Virtual Machine Monitor and is responsible for linking the virtual machine, often called the ‘guest operating system‘, to the underlying physical hardware. The hardware partition for the virtual operating system is also virtual and depends on the VMM. The advantages of virtualization are:

a.) Consolidation
b.) Stability
c.) Security
d.) Development Flexibility
e.) Migration

Products like VMware and Oracle’s VirtualBox make virtualization easy.

This post was all about the introductory part of the operating system syllabus for BCA WBUT, 2014. I would come up with the continuation of the posts related to process management since I had been working on them. Related answers to process management in Operating Systems could be found later in this blog. I would first detail them and then come up with analysis (prediction) for the coming 2014 Winter exams. Stay chilled this winter and have a great start of the week ahead. Taking a leave!

 

3. Operating System – CPU Management

CPU Management – A step towards Process Management

 

By Shritam Bhowmick
Web Application Penetration Tester
LinkedIn: https://www.linkedin.com/profile/view?id=281014248&trk=nav_responsive_tab_profile
Academia: https://independent.academia.edu/ShritamBhowmick
Facebook: https://www.facebook.com/coded32

Last post, I talked about the coverage of the series on operating systems and gave the introductory post on the importance of the operating system, as well as defining it. In this post, I’d discuss CPU management and hence move closer to process management. CPU stands for Central Processing Unit, and the CPU is effectively the process manager of the computer system. It processes the given data into the desired output which is useful for the user. The data which is to be executed via processing should be in the main memory (Random Access Memory, or RAM). A multi-user system uses processes, and multiple processes do not imply multiple CPUs handling them; it means that the processes are handled by the same CPU, but with optimization in time and processing efficiency. The data provided by the user is fed to the CPU as a ‘Job’; this Job needs processing and hence results in execution on the CPU, and Jobs require their dependency data to be present in the main memory. Jobs are the tasks which the CPU has to undertake in order to process them, execute them, and write the results to the secondary devices (special I/O devices, as discussed in my previous posts) or directly as an output to a video terminal, printer, etc.

The terms which would be required for CPU management are:

  1. Waiting Time
  2. Turnaround Time

What is Waiting Time?

The gap in time between when a job is given to the CPU for execution and the actual time when the CPU begins to execute it is known as the waiting time. In other words, the time a given job remains in the ready queue before the CPU begins processing or executing it is termed the waiting time.

Let T(0) be the time at which the user has given a certain job to the CPU for processing or execution, and let the actual time when the CPU begins processing/executing the given job be T(i). The time difference between T(i) and T(0), i.e., T(i) – T(0) = T(w), is termed the waiting time. So the equation is:

T(i) - T(0) = T(w)

Where,

  1. T(i) = Time at which the CPU actually starts processing or executing the supplied job.
  2. T(0) = Time at which the user supplied the job to the CPU for processing or execution.
  3. T(w) = Waiting Time.

Now, for the CPU to give an output, there must be some time taken by the CPU in order to process the job. On a timeline of the whole processing — from the job being given, to job initialization by the CPU, to the output — this could be shown as below:

 

Waiting Time Demonstration

 

As shown, the processed job by the CPU is the last entry, marked as T (p); this is the output timestamp when the job is done by the CPU. Now the concept of turnaround time comes in.

What is Turnaround Time?

The time taken for a job to complete its processing or execution, from the point of the job’s submission to the CPU until its processing has been completed, is termed the turnaround time. In other words, turnaround time is the total time between the submission of a program/process/thread/task for execution and the return of the complete output to the user.

Now, as we had assumed the timestamp when the job is completed to be T (p), the turnaround time would be as described by the following equation:


T (t) = T (p) - T (0)

Where,

  1. T (t) = Turnaround Time.
  2. T (p) = Time at which the processed or executed job is returned to the user, written to the special I/O device, or otherwise output.
  3. T (0) = The timestamp when the user submitted the job/task to the CPU.

Diagrammatically, it would be:

Turnaround Time
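Both formulas can be sketched in a few lines of Python (the function names are mine, not from the text), using the same symbols: submission time T(0), start time T(i), and completion time T(p):

```python
def waiting_time(t_submit, t_start):
    """T(w) = T(i) - T(0): gap between job submission and the CPU starting it."""
    return t_start - t_submit

def turnaround_time(t_submit, t_finish):
    """T(t) = T(p) - T(0): total time from submission to completed output."""
    return t_finish - t_submit

# A job submitted at time 0, started by the CPU at time 5, finished at time 20:
print(waiting_time(0, 5))      # waiting time T(w) = 5
print(turnaround_time(0, 20))  # turnaround time T(t) = 20
```

Note that turnaround time always includes the waiting time plus the actual execution time, which is why users want both kept small.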

Having said that, users would always want the waiting time and the turnaround time to be minimal in order to accomplish the processing and outputting of their jobs. But the CPU handles multiple jobs simultaneously, and hence this isn’t always possible or realistic. Jobs can be executed on a first-come, first-served (FCFS) basis, and hence a certain waiting time has to be allotted to the incoming jobs. Let’s assign these incoming jobs to be:

J1, J2, J3, J4, arriving in ascending order, i.e., first J1 is fed to the CPU, then J2, J3, and finally J4. Hence the processing by the CPU should also be in accordance. That is, if the FCFS (First Come First Served) method is applied, J1 should be undertaken first and then the rest, and hence the processing of J1 will happen first, then the rest. Applying that:

J1 = 15 time units of execution.
J2 = 8 time units of execution.
J3 = 10 time units of execution.
J4 = 3 time units of execution.

The jobs have been assigned their respective time units of execution; that is, each job is actually undertaken by the CPU after a certain time lapse, which until then is its waiting time. Assuming the jobs have arrived at the CPU progressively as J1, J2, J3 and then J4, I draw a general timeline:

 

Calculate Average Waiting Time

 

The above diagram shows the calculation of the average job wait time. The calculation is simple: because it’s FCFS, start from ‘0’ for Job 1, since Job 1 never waited while the others were in the queue. The numbers given are the execution times, that is, the time interval each job took to finish executing, and hence the waiting time for each would be calculated as:

W (j1) = 0
W (j2) = 0 + 15 = 15
W (j3) = 15 + 8 = 23
W (j4) = 23 + 10 = 33


Total Wait time = 71
______________
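The running tally above can be reproduced with a short Python sketch (the function name is my own), assuming jobs arrive in the order J1..J4 with the execution times given:

```python
def fcfs_waiting_times(bursts):
    """Under FCFS, each job waits for the total execution time of all jobs before it."""
    waits, elapsed = [], 0
    for burst in bursts:
        waits.append(elapsed)  # time already spent executing earlier jobs
        elapsed += burst
    return waits

bursts = [15, 8, 10, 3]           # J1, J2, J3, J4 execution times
waits = fcfs_waiting_times(bursts)
print(waits)                       # [0, 15, 23, 33]
print(sum(waits))                  # total wait time: 71
print(sum(waits) / len(waits))     # average waiting time: 17.75
```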

The last 3 units of time weren’t wait time; that was the time taken for the 4th job to finish executing. The 4th job had already waited 33 units of time for its turn to be scheduled for execution. This process undertaken by the CPU is called Job Scheduling, or simply Scheduling. That comes in depth later, but to calculate the average waiting time, I need to consider all the jobs, which are 4 in number, so:

71/4 = 17.75 is the average waiting time, i.e., T (wa). Hence, T (Wa) = 17.75 time units for the above example on average wait time. Now, let’s change the sequential order of the incoming jobs. The CPU has no control over the arrival of the jobs, but what the CPU does have control of is the order in which the jobs will be given priority for execution. This can be decided by the CPU. So, consider the order below for the jobs (considering the same queue of jobs, the situation here is re-scheduling them):

J4 = 3 units of time
J2 = 8 units of time
J3 = 10 units of time
J1 = 15 units of time

In this case, the execution times remain the same; however, the jobs are re-shuffled according to the CPU’s preference and its priority in taking up each job. Previously, the jobs were taken on a first-come, first-served basis, but here, even though the jobs arrived sequentially, the CPU has the authority and control over the execution and processing of the jobs given by the user. The CPU decides to execute the jobs in an order such that J4 is executed first, next J2, then J3, and finally J1.

Considering the above situation, I now want to determine the average waiting time for such a schedule prescribed by the CPU, because the CPU has control over the scheduling but not over the incoming jobs. To draw a rough sketch of the occurrences, with a job execution and waiting timeline:

 

Scheduling without FCFS. Rescheduling with CPU’s control over Scheduling

 

Considering the image above, which I had drawn (MS Paint, mind not!), the time taken for execution of the jobs remains the same as in the previous scenario, but the scheduling was done by the CPU not with the FCFS method but by prioritizing which jobs have to be executed first in order to optimize. The magic happens here. Now if we calculate the waiting time:

W (J4) = 0 (since it was scheduled first)
W (J2) = 0 + 3 = 3 (since J2 had to wait 3 units of time while J4 executed)
W (J3) = 3 + 8 = 11 (since J3 had to wait until J2 finished executing, a time lapse of 8 units)
W (J1) = 11 + 10 = 21 (since J1 had to wait for J3 to finish executing, a time lapse of 10 units)

Hence;

W (J4) = 0
W (J2) = 3
W (J3) = 11
W (J1) = 21


Total Wait Time = 35 time units
________________________________

Now, if I calculate the average waiting time T (Wa) for this, that is 35/4, it comes to 8.75 time units. So:

T (Wa) = 8.75

which is far less waiting time than in the previous scenario. Calculating the optimization which has occurred here, and the difference: our previous FCFS schedule achieved 17.75 units of waiting time, and this scheduling by the CPU achieved 8.75. The difference in wait time optimization is:

17.75 - 8.75 = 9.00 time units

A saving of 9 time units in total. So we find that just by changing the order, the same jobs were optimized. The average waiting time was reduced drastically, so naturally the CPU would prefer the second type of scheduling over the first. The user would always prefer the first scenario, but the CPU won’t allow that, because it has to follow optimization rules. In the second scenario, there is a pattern behind why the average waiting time was drastically reduced: it is achieved by first executing the jobs which take less time, because then the next jobs in the queue have to ‘wait’ less, and hence optimization occurs. There are various objectives which could be achieved; the objective achieved here was to minimize the average waiting time for the jobs to be finished together in an optimized way.
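The two schedules can be compared directly in Python (names mine); sorting the execution times in ascending order reproduces the CPU's preferred order of J4, J2, J3, J1, which is the shortest-job-first idea described above:

```python
def average_wait(bursts):
    """Average waiting time when jobs run back-to-back in the given order."""
    waits, elapsed = [], 0
    for burst in bursts:
        waits.append(elapsed)
        elapsed += burst
    return sum(waits) / len(waits)

arrival_order = [15, 8, 10, 3]          # J1, J2, J3, J4 as submitted
shortest_first = sorted(arrival_order)  # [3, 8, 10, 15] -> J4, J2, J3, J1
print(average_wait(arrival_order))      # 17.75
print(average_wait(shortest_first))     # 8.75
print(average_wait(arrival_order) - average_wait(shortest_first))  # 9.0
```

Sorting by burst length is in fact provably optimal for average waiting time, which is why the re-scheduled order beats FCFS here.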

So that’d be the very first touch of process management in this post. Next post, I’d bring some more fruits to the blog. The concepts of process management are vast and do not end with this basic post. To keep updated, if you like these posts, subscribe via mail or keep visiting the blog. Cheers out!

2. Operating System- Importance of Operating System

The Importance of an Operating System

By Shritam Bhowmick
Web Application Penetration Tester
LinkedIn: https://www.linkedin.com/profile/view?id=281014248&trk=nav_responsive_tab_profile
Academia: https://independent.academia.edu/ShritamBhowmick
Facebook: https://www.facebook.com/coded32

Before I get to the importance of the operating system, I must define what an operating system is, give the standard definition, and then deduce its importance. In order to define the operating system, one must understand that there is a distinction between the software components and the hardware components of a computer. The computer itself cannot accomplish everything; it needs support from the software as well as from the hardware in order to accomplish the task it has been assigned. This applies to supercomputers, personal computers, Macs and any devices which fall in the realm of being called a computer, including basic calculators, watches, mobile devices and any other gadget which uses hardware components as well as pre-made integrated software components. Different files which are inter-related and accomplish a certain set of tasks make up the operating system. These files are system-level files which do scheduling, interrupting, data transferring and managing the flow of data, and they form the low-level software component of the computer itself. Hence the standard definition of an operating system would be:

An operating system is a set of program files which controls the resources of the computer system and allows communication between the hardware components and the software components of the computer system. Operating systems provide access to computer services, which is possible only via the working of both the hardware components and the software components.

The operating system is like a resource manager. It handles decision making and interruption. It manages the time for tasks to occur. So, what are the resources of the computer system?

  1. CPU – Central Processing Unit of the computer system – performs the execution of program operations. The compiled program, or executable, has to be executed by the CPU to give the users the desired result. The program is taken from the secondary storage drive to the main memory. The CPU cannot directly access secondary storage; the program must be brought from the secondary memory to the main memory for execution. Even the data to be processed has to be brought from the secondary memory to the main memory. The data which is to be executed must reside in the main memory for execution and cannot be executed from the secondary memory. Therefore the first resource of the computer system is the CPU, the central processing unit.
  2. The Main Memory – When a piece of program has to be executed by the CPU, the data or program must reside in the main memory. Hence, in a multi-user system, more than one user program may have to be executed simultaneously, and managing these programs for more than one user is a required responsibility of the operating system.
  3. Secondary Storage – This consists of hard disks, magnetic tapes, floppy disks, CD-ROM drives and USB storage devices. The way the main memory is accessed and the way in which secondary storage is accessed are different. There are different types of secondary storage, as discussed above, and the characteristics of these storage devices are different and never uniform; accordingly, the ways the different secondary storage devices are accessed differ. Main memory is accessed directly by the CPU, but secondary storage is not. In case required data has to be fetched into the main memory for the execution of a certain program, the CPU has to consult or instruct the device driver. The device driver would look for the required files, data or programs which the CPU needs for execution, fetch them, and bring them back for such an execution to occur. Hence the way the main memory is accessed and the way the secondary storage is accessed are different.
  4. Input/Output Devices (I/O devices) – These are the devices through which the user feeds data, writes data and interacts physically, such as the keyboard, mouse, etc. These are the input devices. When the user wants to get the data as an output in visual form, via a video terminal, a printer, etc., all of these are output devices. The secondary storage could also be considered an I/O device, because a processed data file normally resides in the secondary storage: when a program is executed, it opens the data file stored on the secondary storage device, reads it, and after reading closes it, which means data is also input, not through the keyboard but through the hard disk or the secondary storage. Similarly, when processing data, the output is stored in the secondary storage. Considering this, the secondary storage has to be treated as a special type of input/output device.

So, there are four major types of resources which the computer system has. These are, as discussed above, the CPU, the main memory, the secondary storage and the input/output devices. The main memory and the secondary storage are types of memory modules and could be termed ‘memory devices’, which the computer system has the responsibility to manage as ‘memory management’. Similarly, the computer system has to manage the secondary storage and the input/output devices, where the secondary storage is treated as a special kind of input/output component. The computer system is required to efficiently manage all these resources in order to provide time to multiple users or clients. Traditionally these resources are confined to a single system, but in modern operating systems this isn’t always the case, thanks to the advancement of information distribution. When multiple computers are interconnected with the help of networking, they can accomplish distributed working in order to give relevant information to the users. Even if the network fails, the isolated computers would still work. Information can be transferred from one computer to a different computer at a different geographical location through networking, and hence, with the distribution of information, the concept of the distributed computing system comes in.

 

Distributed Computer System

 

Advantages of distributed computer systems:

  1. The information storage required is minimal because all the information is not required to be on a single computer system. It could be distributed among different computers. The information is broken into a number of pieces and stored on different isolated computers which are inter-networked.
  2. Having such a distributed computing environment prevents duplication of information. It prevents multiple copies of the same files (data duplication), which isn’t desired. This increases efficiency, and hence the efficient utilization of resources. Modern operating systems cater to this efficiency management.

Hence the responsibilities of the operating system include controlling the hardware components and the resources which are provided to the computer system, routinely scheduling the tasks carried out by the computer system, and, by merging the functions of hardware and software, providing computer-based services to the users of the computer system. Now that we are aware of certain responsibilities of the operating system, we must enlist some of its important aspects. They are as follows:

1. It moderates the relationship between the computer and its peripherals.

2. It helps in the management of files,- copying, deleting, moving of files from one storage location to the other.

3. It manages the memory for efficient usage, thus adding to the speed of the computer.

4. It manages the activities of the processor in terms of job execution according to the priority of arrival of jobs.

5. It informs the user of any hardware or software error.

6. It makes communication between the computer and the user or the operator possible.

(Reference: http://thecomputersystem.blogspot.in/2009/06/importance-of-operating-system.html)

In order to understand this perspective more deeply, consider reading the next posts about the operating system. The posts will include process management, cache memory, deadlocks and the coverage I had laid out in the previous post. I shall take a leave, and the next post will be on CPU management.

 

1. Operating System – An Introductory Post on coverage.

An Introduction to Operating Systems

By Shritam Bhowmick
Web Application Penetration Tester
LinkedIn: https://www.linkedin.com/profile/view?id=281014248&trk=nav_responsive_tab_profile
Academia: https://independent.academia.edu/ShritamBhowmick
Facebook: https://www.facebook.com/coded32

Hi, this semester I had something new to discuss. The subject was Operating System, and I find it a pleasure writing down all of the knowledge gained through this blog post. This post will be the introductory one, since there is a lot to cover. And because there is a lot to cover, I will proceed with clearly mentioned points and hence cover only the important aspects. The coverage will encircle the whole syllabus. To let you know, the university (WBUT, 2014) has prescribed the following syllabus:

  1. Importance of OS.
  2. Basic concepts and terminology
  3. Types of OS
  4. Different views
  5. Journey of a command execution
  6. Design and implementation of OS
  7. Process: Concept and views
  8. OS view of processes
  9. OS services for process management
  10. Scheduling algorithms,
  11. Performance evaluation
  12. Inter process communication and synchronization
  13. Mutual exclusion
  14. Semaphores
  15. Hardware support for mutual exclusion
  16. Queuing implementation of semaphores
  17. Classical problem of concurrent programming
  18. Critical region and conditional critical region
  19. Monitors
  20. Messages
  21. Deadlocks
  22. Resource manager
  23. Memory management
  24. File management
  25. Processor management
  26. Device management
  27. Security and protection
  28. Authentication, protection and access control
  29. Formal models of protection
  30. Worms and viruses.
  31. Multiprocessor system, classification and types
  32. OS functions and requirements
  33. Introduction to parallel computing
  34. Multiprocessor interconnection synchronization.
  35. Distributed OS – rationales
  36. Algorithms for distributed processing
  37. Introduction to Unix OS/DOS (case study)

All of the above topics are a standard evaluation of the syllabus, and the topics marked are to be covered. The first topic is the importance of the OS and shall be discussed from the next post. It has come to my attention that universities, rather Indian universities, are not very conceptual in drafting out a syllabus. They care only to finish a heap of things without any real conclusion on the achieved skill-set. However, I choose to go the other way round and make things better with my own research and dedication, because I am interested in such topics and a college degree doesn’t matter much to me. The degree which is fetched might be useful for industrial jobs, but these jobs are often versatile across different things, and the employees themselves need to be trained, fed, kicked, and thereby have the possibility of invention reduced. This in turn re-invents the wheel of ultimate progression and limits the imaginative thoughts which could be turned into an invention or discovery.

As far as this post is concerned, the coverage of the syllabus laid down by WBUT has been covered, and hence I shall proceed to the next post, in continuation of this one, on the topic ‘Importance of OS’. Because without really knowing why we need an operating system, we would be lost among the topics. Feel free to navigate along the ‘categories’ to discover new content added to ‘Operating Systems’. I shall post when time permits and will share additional resources if any are available. I had planned to cover each topic in its own post so as to leave a mark with a good outline coverage of all the topics. The probability of the questions asked, and their answers, will also be covered. An analysis of the previous years’ questions will also be sorted out, providing a flexible way to deduce the questions for the upcoming years. Since these posts were written in 2014, it falls to other academic students to share these resources and posts with others who are in need, who have opted for the same university and have these topics to cover. I shall take a leave now from this post and move along to the next posts which are due to come.

 

 

ISAD – Information System Analysis and Design.

So, the bitter truth is I really had not been prepared for ISAD, a part of the BCA academic course. I had to go check what was on the syllabus to make myself acquainted with ISAD. I found that most of the topics had been summoned from the earlier dose of business systems and applications. I quickly researched some of the topics I could study, and mywbut.com came in very handy, resourceful with materials. I knew I could get some of the topics fetched from this site, which had prepared the notes already. I am just pushing my information on the topics I know about, because not all of the content can be grabbed from this site alone. There are other resources as well; I will share them after I finish off my content on this. This post is supposed to be updated when I am updated. Again, you cannot get ‘all’ from this post itself. If you are specifically doing your academic kung-fu, this is not the best place. I write my research and personal papers here, which might be found useful at some point. This doesn’t mean you end here; hopefully you do your part.

These are the topics that I see I can get my hands on:

  1. Software Development Life Cycles – All those old bees. Namely the Waterfall model, Spiral model, Incremental and you name it! There would be more…
  2. Feasibility Studies – I am really not aware of this. What is this supposed to mean? I have to research, and let’s see what I get!
  3. Requirements Determination – Em, I can just imagine there are requirements of some sort… I had been on pentesting; this is not majorly what I do (academics!).
  4. Logical Design – Falls in System Design, we will get on all that later.
  5. Physical Design – Falls in System Design.
  6. Program Design – In System Design.
  7. Risk and Feasibility Analysis – .. [redacted]

This post has been cut short due to lack of time (the post was maintained for further study, but due to the lack of time, it is discontinued…). See you another time, or catch the sections which are available. Some good resources to look into are:

Tricks to get maximum:

http://qpaper.wbut.ac.in/

Tricks to get paper and the notes:

http://www.mywbut.com/department.php?mode=SM&deptid=12&semid=2

You’ll need registration, though. You might also find this beneficial:

http://www.bcanotes.com/Sad.html

But I doubt the above link would be much beneficial. If that doesn’t work, get here:

pwntoken.wordpress.com

because if I change my mind and find some spare time, I’d share it here. This can/will happen if I am motivated enough. Go through my other posts on the 3rd semester if needed. All the 2nd semester posts are redacted due to lack of maintenance and time management.