Table of contents
1. Introduction
2. Function Preserving Optimization
   2.1. Common Subexpressions
   2.2. Copy Propagations
   2.3. Dead Code Elimination
      2.3.1. Code Motion
   2.4. Folding
3. Loop Optimizations
   3.1. Algebraic Expression Simplification
   3.2. Induction Variable and Strength Reduction
   3.3. Frequency Reduction
   3.4. Redundancy Elimination
4. Frequently Asked Questions
   4.1. How does machine-independent code become independent?
   4.2. What is a machine-independent language?
   4.3. How many phases are there in Compiler Design and in which phase Machine Independent Optimization is used?
   4.4. What is folding in compiler design?
   4.5. What is the difference between Machine Dependent and Machine Independent Code Optimization?
5. Conclusion
Last Updated: Mar 27, 2024

Machine Independent Optimization

Author Divyansh Jain

Introduction

Machine-independent optimization's major goal is to improve the generated intermediate code so that the compiler can generate better target code.

Code improvement, or code optimization, is the process of removing unneeded code from the object code, or substituting one piece of code with a faster equivalent, without changing the output the object code produces.


Function Preserving Optimization

Function Preserving Optimization (FPO) is a technique in compiler design that aims to transform a program while preserving its functionality. The goal is to improve the program's performance without altering its behavior.

FPO achieves this by applying optimizations that do not change the output produced by the program. These optimizations can involve transformations like constant folding, loop unrolling, and common subexpression elimination. By applying FPO, compilers can generate more efficient code that executes faster and uses fewer resources.

It can be achieved through the following methods:

  1. Common Subexpressions
  2. Copy Propagations
  3. Dead Code Elimination
  4. Folding

1. Common Subexpressions

A common subexpression is one that has been computed before, whose operands have not changed since the last computation, and that is repeated throughout the program. The compiler re-evaluates it each time it appears, even though its value cannot have changed. This kind of re-evaluation wastes both resources and time, so it should be removed. Consider the following scenario:

Before Optimization 

s1 = a + b ;
s2 = s1 % k ;
s3 = a + b ;
s4 = s1 % k ;

 

Here, in the above code, s1 and s3 compute the same value, as do s2 and s4, so the repeated computations can be eliminated. The optimized code is as follows:

s1 = a + b ;
s2 = s1 % k ;

2. Copy Propagations

Copy Propagation applies to assignments of the form a = b, known as copy statements. After such an assignment, occurrences of a can be replaced with b wherever a's value is needed, which often makes the copy itself unnecessary. In a nutshell, Copy Propagation removes duplicate variables from a program.

For example,

a = c ; 
d = a + b; 

In the above code, the assignment makes a a mere copy of c, so we can replace a with c:

d = c + b; 

3. Dead Code Elimination

When a program snippet is never used, it can be removed without affecting the rest of the program. This type of elimination is referred to as Dead Code Elimination.

Before Optimization

int b = 24; 
int c = b - 17; 
int a = b;
printf("%d", b); 

In the above code the variables c and a are assigned but never used, so they should be removed using dead code elimination. The optimized code is as follows.

int b = 24; 
printf("%d", b);


Code Motion

In this mode of optimization, statements inside a loop whose values remain the same for every iteration slow the code down, because they are re-executed each time the loop runs. Such statements can be moved outside the loop, which reduces the time spent inside it. To understand this, let's look at an example.

for(i = 0; i < (q + p); i++){
    a = 15; 
    printf("%d\n", i);
}

In the above code, the assignment a = 15 and the bound (q + p) are evaluated on every iteration of the loop, yet they never change. They can therefore be moved out of the loop to reduce its running time.

a = 15;
t = q + p;

for (i = 0; i < t; i++){
    printf("%d\n", i);
}

4. Folding

Folding, in the context of FPO (Function-Preserving Optimization), refers to a technique that simplifies constant expressions within a program without altering its functionality. This optimization reduces redundant calculations and improves program efficiency.

Here's how folding works:

  • Identifying constant expressions: The compiler identifies expressions that always evaluate to the same value, regardless of the program's input. These typically involve constants, variables with known constant values, and basic arithmetic operations.
  • Simplification: The compiler replaces each identified constant expression with its pre-calculated result, eliminating the need to re-evaluate the expression during program execution.
Example:

Consider the following code snippet:

int result = 2 * 5 + 3;
// After folding:
int result = 13; // (2 * 5 + 3) is pre-calculated and stored in result directly.

In this example, the expression 2 * 5 + 3 always evaluates to 13, regardless of input, so folding replaces it with its result (13) at compile time. This eliminates a redundant run-time calculation and simplifies the code. Folding can be applied to various constant expressions, including:

  • Simple arithmetic operations (addition, subtraction, multiplication, division)
  • Boolean expressions (true or false)
  • Comparisons (equal to, greater than, less than)
     

By applying folding and other FPO techniques, compilers can generate more efficient code that executes faster and reduces processing time.


Loop Optimizations

Loop optimizations are a crucial aspect of FPO (Function-Preserving Optimization) that focuses on improving the performance of loops within a program. These optimizations aim to reduce the number of instructions executed and streamline loop execution without affecting the overall program behavior. By optimizing loops, compilers can significantly improve the program's speed and efficiency.

Loop Optimization techniques can be applied following the detection of loops:

  1. Algebraic Expression Simplification
  2. Induction Variable and Strength Reduction 
  3. Frequency Reduction
  4. Redundancy Elimination

1. Algebraic Expression Simplification

A program may include simple algebraic expressions that produce no relevant computation or change in value. Such lines of code can be removed to save evaluation time.

s = s * 1 ; 
p = p + 0 ; 

These statements don't produce any useful results. Such code may appear harmless, but it is evaluated every time it is reached inside a loop, so it is advisable to get rid of it.

2. Induction Variable and Strength Reduction 

When a high-strength (expensive) operator appears in a program, we can replace it with an equivalent low-strength (cheaper) operator to speed up execution.

For example, multiplication by a power of two can be reduced to a binary left shift, which is cheaper to execute. One such example is given below.

Before Optimization

a * 16; 

After Optimization

a << 4; 

In the above example, the shift is much cheaper for the processor to execute than the multiplication, which reduces the running time of the code.

3. Frequency Reduction

Frequency reduction, in loop optimization, aims to minimize the number of times a particular operation is executed within a loop. This is achieved by identifying instructions that can be hoisted out of the loop, meaning they are executed only once before the loop begins, rather than repeatedly inside the loop.

Example:

Consider the following code:

for (int i = 0; i < 10; i++) {
  int divisor = 2; // This calculation is repeated in each iteration
  result = value / divisor;
}

 

In this example, initializing the divisor (2) inside the loop is redundant. Frequency reduction would optimize this by:

Moving the divisor calculation before the loop:

int divisor = 2;
for (int i = 0; i < 10; i++) {
  result = value / divisor;
}

 

 

This reduces the number of calculations within the loop, improving efficiency.

4. Redundancy Elimination

Redundancy elimination identifies and removes unnecessary calculations within a loop that produce the same result multiple times. This optimization focuses on eliminating redundant expressions that are re-evaluated in each iteration.

Example:

Consider the following code:

for (int i = 0; i < 10; i++) {
  a[i] = b * c + i;
  d[i] = b * c - i;
}

Here, the expression b * c is evaluated twice in every iteration even though its value never changes. Redundancy elimination would optimize this by:

Computing b * c once, storing it in a temporary variable, and reusing it:

int t = b * c;
for (int i = 0; i < 10; i++) {
  a[i] = t + i;
  d[i] = t - i;
}

 

This eliminates the redundant calculation within the loop, improving performance.

Frequently Asked Questions

How does machine-independent code become independent?

A C program designed for one piece of hardware can be compiled and run on any other piece of hardware with little to no modification. Even if a change is necessary, it will only affect platform (OS) functions and system calls. The term "machine-independent" refers to this.

What is a machine-independent language?

A language that can run on any machine is known as a machine-independent language. Java is a good example: the Java Virtual Machine (JVM) takes compiled Java bytecode and runs it on whatever machine the JVM itself is installed on.

How many phases are there in Compiler Design and in which phase Machine Independent Optimization is used?

There are 6 phases:
1) Lexical Analysis
2) Syntax Analysis
3) Semantic Analysis
4) Intermediate Code Generator
5) Code Optimizer
6) Code Generator
Machine-independent code optimization is applied in the Code Optimizer phase. It attempts to improve the efficiency of the intermediate code by transforming sections of code without involving hardware components such as CPU registers or absolute memory locations.

What is folding in compiler design?

Constant folding is an optimization technique that evaluates, at compile time, expressions whose values can be calculated before the code is executed. These are usually expressions that involve only constant values or variables with known constant values.

What is the difference between Machine Dependent and Machine Independent Code Optimization?

Machine Dependent:
  • Applied to object code.
  • Involves CPU registers and absolute memory references.

Machine Independent:
  • Applied to intermediate code.
  • Does not involve CPU registers or absolute memory references.

Conclusion

In the above session, we have gained knowledge of the ways in which a compiler optimizes code, using techniques like Dead Code Elimination, Copy Propagation, Common Subexpression Elimination, Code Motion, Induction Variables, and Strength Reduction. These techniques help the compiler produce faster, more efficient target code.



Happy Learning Ninja :)
