Problem:
Given a number N (1 <= N <= 10 ^ 9), find the number of ways to get a sum equal to N by adding consecutive odd numbers.
Note: Consecutive odd numbers are a set of numbers where each number is not divisible by 2 and the difference between any two adjacent numbers is 2. For example, [1, 3, 5] and [15, 17, 19, 21] are valid sets, while [3, 7, 9] and [2, 3, 5] are not.
Example -
Input: N = 63
Output: 3
Explanation: There are three sets of consecutive odd natural numbers that give sum = 63:
- [3, 5, 7, 9, 11, 13, 15]
- [19, 21, 23]
- [63]
Therefore, the number of ways to get a sum equal to N is 3.
Input: N = 9
Output: 2
Explanation:
The valid sets of consecutive odd natural numbers are:
- [1, 3, 5]
- [9]
Therefore, the number of ways to get a sum equal to N is 2.
Naive Approach
Let us call the endpoints of the set of consecutive odd numbers L and R, where ‘L’ is the left endpoint and ‘R’ is the right endpoint. Now we have to check whether the sum of the consecutive odd numbers between ‘L’ and ‘R’ (inclusive) equals ‘N’. Note that both ‘L’ and ‘R’ will be odd.
Recall the basic mathematical formula: the sum of the first ‘X’ odd natural numbers is X ^ 2. We can use this formula to find the sum of the odd numbers between ‘L’ and ‘R’ as follows:
- The number of odd numbers < L = floor(L / 2)
- The number of odd numbers <= R = ceil(R / 2)
- The sum of odd numbers that are < L = (floor(L / 2)) ^ 2
- The sum of odd numbers that are <= R = (ceil(R / 2)) ^ 2
- The sum of odd numbers between L and R = (ceil(R / 2)) ^ 2 - (floor(L / 2)) ^ 2
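For example, take L = 3 and R = 15 (the first set in the N = 63 example): (ceil(15 / 2)) ^ 2 - (floor(3 / 2)) ^ 2 = 8 ^ 2 - 1 ^ 2 = 64 - 1 = 63.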
Using the above formula, we can calculate the sum of all the odd numbers between L and R in constant time. We can then iterate over both endpoints L and R, and whenever the sum of the odd numbers between them equals N, increment our answer. In the end, we will have the number of ways to get a sum equal to N. A rough sketch of this brute-force check is shown below.
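Here is a minimal sketch of this O(N ^ 2) brute-force idea. The helper names sumOddBetween and countWaysBruteForce are illustrative and not part of the original code:
#include <bits/stdc++.h>
#define ll long long
using namespace std;
// Sum of all odd numbers in [l, r] in O(1), assuming l and r are odd
// (illustrative helper, not from the original implementation).
ll sumOddBetween(ll l, ll r)
{
    ll cnt_L = l / 2;       // odd numbers strictly below l
    ll cnt_R = (r + 1) / 2; // odd numbers up to and including r
    return cnt_R * cnt_R - cnt_L * cnt_L;
}
// Brute force: try every pair of odd endpoints (L, R) with L <= R <= n.
int countWaysBruteForce(ll n)
{
    int ways = 0;
    for (ll l = 1; l <= n; l += 2)
        for (ll r = l; r <= n; r += 2)
            if (sumOddBetween(l, r) == n)
                ways++;
    return ways;
}
For N = 63, this counts the three sets [3..15], [19..23], and [63]. The binary-search optimization below replaces the inner loop.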
Can we optimize this approach further? Yes. Note that for a fixed L, the sum of the odd numbers always increases as R increases: for small values of R the sum is less than N, it may become equal to N at some point, and after that it is greater than N. This monotonicity gives us the idea to use Binary Search: we can binary search over the right endpoint R.
Implementation
#include <bits/stdc++.h>
#define ll long long
using namespace std;
/*
Function to find the
number of ways to get a sum equal to N.
*/
void solve(ll n)
{
    int ans = 0;
    // Iterating over the left endpoint L (always odd)
    for (ll i = 1; i <= n; i += 2)
    {
        ll l = i;
        ll r = n;
        // Doing Binary Search over the right endpoint R
        while (l <= r)
        {
            ll mid = (l + r) / 2;
            // Number of odd numbers < L
            ll cnt_L = i / 2;
            // Number of odd numbers <= R
            ll cnt_R = (mid + 1) / 2;
            // Sum of odd numbers from L to R
            // (long long avoids overflow of the squares for large N)
            ll cur_sum = cnt_R * cnt_R - cnt_L * cnt_L;
            // If sum = n, increment the answer and break
            if (cur_sum == n)
            {
                ans++;
                break;
            }
            // If sum < n, move the left pointer right
            else if (cur_sum < n)
            {
                l = mid + 1;
            }
            // If sum > n, move the right pointer left
            else
            {
                r = mid - 1;
            }
        }
    }
    // Printing the number of ways to get a sum equal to N
    cout << ans << endl;
}
// Driver Code
int main()
{
    ll n = 63;
    solve(n);
    return 0;
}
Output
3
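As a quick check, changing the driver to call solve(9) prints 2, matching the second example above.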
Time Complexity: O(N log N)
In the above approach, we iterate over the left endpoint L from 1 to N, and for each L we perform a binary search over the right endpoint R. Since the sum is computed in constant time, each binary search takes O(log N) time. Therefore, the total time complexity of this approach is O(N log N).
Space Complexity: O(1)
We use only a few extra variables, so the space complexity is O(1).