Code360 powered by Coding Ninjas X Naukri.com
Table of contents
1. Introduction
2. Problem statement
3. The Brute Force Approach
3.1. Code in C++
3.2. Input
3.3. Output
3.4. Space Complexity
3.5. Time Complexity
4. The Optimized Approach
4.1. Code in C++
4.2. Input
4.3. Output
4.4. Space Complexity
4.5. Time Complexity
5. FAQs
6. Key Takeaways
Last Updated: Mar 27, 2024
Easy

SEAD Problem (Codechef)

Author SHIKHAR SONI

Introduction

This article explores the ‘SEAD’ problem on CodeChef. It aims to help you understand binary search beyond just searching for elements in sorted arrays; ‘SEAD’ is a good problem for learning the various ways binary search can speed up a solution. Binary search is a fundamental algorithm usually taught in introductory programming courses, and being acquainted with its variations can make the difference between an AC and a TLE.

Problem statement

In this problem, you have an integer array (of size n) whose elements are sorted in non-decreasing order, i.e., a1 <= a2 <= … <= an. For each query you are given two integers, t and d, and you are asked to find the smallest index l such that the following conditions are satisfied: al + d >= al+1, al+1 + d >= al+2, and so on until ar-1 + d >= ar, where r is the largest index with ar <= t (so ar+1 > t if index r+1 exists in the array).

The input format will be - 

Line 1 contains a single integer n (denoting the size of the array)

Line 2 contains the array elements (space-separated)

Line 3 contains a single integer q (total number of queries)

Next, q lines contain two space-separated integers, t and d (as defined earlier).

Input constraints:

1 <= n, q <= 10^5

1 <= ai, t, d <= 10^6


The Brute Force Approach

In this approach, we aim for the right answer in the simplest way: traverse the array elements for each query. Each query then costs O(n) time, so over the whole program we end up iterating over the array again and again, once per query.

Here, for each query we find the endpoint r by scanning for the last index such that a[r] <= t, and then we walk back from index r until we hit an index p (p < r) with a[p] + d < a[p+1].

Code in C++

#include<bits/stdc++.h>
using namespace std;


#define int long long
#define FAST_IO ios::sync_with_stdio(0); cin.tie(0);


void solve(){
    int n; cin >> n;
    vector<int> a(n);
    for(auto &x: a){
        cin >> x;
    }
    int q; cin >> q;
    int t, d;
    while(q--){
        cin >> t >> d;


        // first we find the end-point r: the last index with a[i] <= t
        int r = n - 1;
        for(int i = 0; i < n - 1; i++){
            if(a[i + 1] > t){
                r = i;
                break;
            }
        }

        // now we walk back from r while a[p] + d >= a[p + 1], p < r
        int l = r;
        for(int p = r - 1; p > -1; p--){
            if(a[p] + d < a[p + 1]) break;
            l = p;
        }
        cout << l + 1 << endl;
    }
}


int32_t main() {
    FAST_IO
    int t;
    t = 1;
    while(t--)
        solve();
}

Input

5
1 2 3 10 50
6
1 1
5 3
11 7
100000 1
1000000 1000000
11 6

Output

1
1
1
5
1
4

Space Complexity

Other than the input, our program takes constant extra space, i.e., O(1) extra space.

Time Complexity

The time complexity of our program is O(n * q). Here, in the worst case, we may be traversing almost the whole array for every query.

The Optimized Approach

Our earlier brute-force approach is too slow for the constraints of the problem. We’ll be improving on it using binary search.

We need to speed up the computation for each query. Notice that for each t, there is at most one index i in the array such that a[i] <= t and a[i+1] > t. Since the array is sorted, we can use binary search to find this index i, and that will be the endpoint r.

Now the task is to find the smallest l. Here we notice that the valid starting points form a contiguous block ending at r: if l is valid, i.e., a[p] + d >= a[p+1] holds for every p in [l, r-1], then l + 1 is also valid, because its conditions are a subset of l’s.

So we essentially have to find the leftmost index l <= r such that a[p] + d >= a[p+1] holds for every p in [l, r-1]. We can use binary search here because this property is monotonic in l: once it holds for some l, it holds for everything between l and r. We binary search over 1…r for the leftmost index satisfying the property, reducing the time per query to O(log2(n)). This approach is much better and will pass the TL (time limit) for this question.

The process for using binary search in finding l will be as follows:

  • We construct an array b with size one less than a. 
  • Each element b[i] = a[i+1] - a[i]. Rearranging the property a[p] + d >= a[p+1] gives a[p+1] - a[p] <= d, so the condition is that the difference between adjacent elements is less than or equal to d for all indices in [l, r-1]. 
  • The monotonic check then reduces to a simple problem: find the smallest i for which the maximum element of b in the range [i, r-1] is less than or equal to d, and print it. Finding the maximum in a range in O(log2(n)), or even O(1) (for an array that doesn’t change), is possible with a segment tree or a sparse table. 
  • We won’t be using segment trees in our implementation, although that solution would also work within the TL. The sparse table is ideal for this problem as there are no point/range updates to be done, and it takes O(1) time per query while the segment tree takes O(log2(n)) time.
     

The step by step approach is as follows:

  1. For each query, find r using binary search. You could use the STL’s upper_bound here; in the implementation below, we write the binary search by hand instead. The two methods are equivalent, but writing the binary search yourself is good practice and helps you get comfortable handling its edge cases (mishandling them can lead to infinite loops).
  2. We then create another array b, where b[i] = a[i+1] - a[i] for all i, to check the property.
  3. After creating the array, we construct the sparse table on array b.
  4. In the end, we binary search on the indexes in the closed range [0, r-1] to obtain the smallest index i such that the maximum of all elements of b in the range [i, r-1] is less than or equal to d.
  5. The obtained i is our l and also our final answer. We print it and continue the same process for the other queries.

Code in C++

#include<bits/stdc++.h>
using namespace std;


#define int long long
#define FAST_IO ios::sync_with_stdio(0); cin.tie(0);


template<typename Node>
struct SparseTable {
    vector<vector<Node>> table;
    vector<int> logValues;
    int n;
    int maxLog;
    vector<int> a;
    SparseTable(vector<int> &arr) {
        n = arr.size();
        a = arr;
        table.resize(n);
        logValues.resize(n + 1);
        maxLog = log2(n);
        logValues[1] = 0;
        for (int i = 2; i <= n; i++) {
            logValues[i] = logValues[i / 2] + 1;
        }
        for (int i = 0; i < n; i++) {
            table[i].resize(maxLog + 1);
            fill(table[i].begin(), table[i].end(), Node());
        }
        build();
    }
    void build() {
        // level 0 holds the array itself
        for (int i = 0; i < n; i++) {
            table[i][0] = Node(a[i]);
        }
        // level i merges two adjacent blocks of length 2^(i-1)
        for (int i = 1; i <= maxLog; i++) {
            for (int j = 0; (j + (1 << i)) <= n; j++) {
                table[j][i].merge(table[j][i - 1], table[j + (1 << (i - 1))][i - 1]);
            }
        }
    }
    Node queryNormal(int left, int right) {
        Node ans = Node();
        for (int j = logValues[right - left + 1]; j >= 0; j--) {
            if ((1 << j) <= right - left + 1) {
                ans.merge(ans, table[left][j]);
                left += (1 << j);
            }
        }
        return ans;
    }
    Node queryIdempotent(int left, int right) {
        // two possibly overlapping blocks cover [left, right] exactly
        int j = logValues[right - left + 1];
        Node ans = Node();
        ans.merge(table[left][j], table[right - (1 << j) + 1][j]);
        return ans;
    }
};
struct Node1 {
    int val;
    Node1() {
        val = 0;
    }
    Node1(int v) {
        val = v;
    }
    void merge(Node1 &l, Node1 &r) {
        val = max(l.val, r.val);
    }
};


void solve(){
    int n; cin >> n;
    vector<int> a(n), b(n - 1);
    for(auto &x: a){
        cin >> x;
    }
    for(int i = 0; i < n - 1; i++){
        b[i] = a[i + 1] - a[i];
    }
    SparseTable<Node1> sp = SparseTable<Node1>(b);
    int q; cin >> q;
    int t, d;
    while(q--){
        cin >> t >> d;
        // first we find the end-point R
        int l = 0, r = n - 1;
        while(l < r){
            int mid = l + (r - l + 1) / 2;
            if(a[mid] <= t){
                l = mid;
            }
            else{
                r = mid - 1;
            }
        }
        // R found and promptly set
        int R = r;
        // resetting l = 0
        l = 0;
        while(l < r){
            int mid = l + (r - l) / 2;
            if(sp.queryIdempotent(mid, R - 1).val <= d){
                r = mid;
            }
            else{
                l = mid + 1;
            }
        }
        cout << l + 1 << endl;
    }
}


int32_t main() {
    FAST_IO
    int t;
    t = 1;
    while(t--)
        solve();
}

Input

5
1 2 3 10 50
6
1 1
5 3
11 7
100000 1
1000000 1000000
11 6

Output

1
1
1
5
1
4

Space Complexity

Other than the input, our program takes O(n*log2(n)) extra space to construct our sparse table from the b array.

Time Complexity

The time complexity of our program is O(log2(n) * q). With binary search, we consume only O(log2(n)) operations for finding r and later O(log2(n)) operations for finding the l (our final answer).


FAQs

1. Why is mid = l + (r - l) / 2 or mid = l + (r - l + 1) / 2 used instead of a simple (l + r) / 2 in various solutions using binary search?

Generally, this is done to prevent overflow, but it also helps to correctly round values up/down depending on the requirement when l or r is negative.

2. What is a sparse table?

A sparse table is a data structure that’s very easy to implement. It’s particularly useful in answering certain queries in O(1) time for a static array. The O(1) time approach is, however, limited to only idempotent functions, and for normal functions, it again takes O(log2(n)) time.

3. Why can’t we use a Fenwick tree?

It is actually possible to solve this problem with the same time complexity using a Fenwick tree. Still, it’s not usually considered, because doing so requires a number of changes to the standard Fenwick tree, making it harder than the segment tree approach unless you know what you are doing. It’s also not usually present in most coders’ libraries.

4. What are idempotent functions?

Idempotent functions are those that can be applied multiple times without changing the result beyond the initial application. A useful consequence: if you have an array X and two overlapping ranges [a, b] and [c, d] with a <= c <= b <= d, then F(F(X[a]...X[b]), F(X[c]...X[d])) = F(X[a]...X[d]) (where F is an idempotent function).

5. What are some examples of idempotent functions?

Examples of idempotent functions are abs, max, min, gcd, lcm, bitwise AND, bitwise OR, etc. On the other hand, addition, subtraction and bitwise XOR are not idempotent functions.

Key Takeaways

In this article, we have extensively discussed the ‘SEAD’ problem from CodeChef and given both a brute force and an optimized solution, with C++ code for each. Refer to Fenwick Tree RMQ to learn more about its use (for this specific problem, we recommend using a sparse table).

Learn more about RMQ in sparse tables and segment trees from our blog collection with detailed explanations.

Refer to our guided paths on Coding Ninjas Studio to learn more about DSA, Competitive Programming, JavaScript, System Design, etc. 

Enrol in our courses and refer to the mock test and problems available.

Take a look at the interview experiences and interview bundle for placement preparations.

Do upvote our blog to help other ninjas grow. 

Happy Coding!
