NCPC 2015
Problem D
Disastrous Downtime
Problem ID: downtime
(Picture by Claus Rebler, cc-by-sa)
You’re investigating what happened when one of your computer systems recently broke down. So far you’ve concluded that the system was overloaded; it looks like it couldn’t handle the hailstorm of incoming requests. Since the incident, you have had ample opportunity to add more servers to your system, which would make it capable of handling more concurrent requests. However, you’ve simply been too lazy to do it—until now. Indeed, you shall add all the necessary servers ...very soon!
To predict future requests to your system, you’ve reached out to the customers of your
service, asking them for details on how they will use it in the near future. The response has been
pretty impressive; your customers have sent you a list of the exact timestamp of every request
they will ever make!
You have produced a list of all the n upcoming requests specified in milliseconds. Whenever
a request comes in, it will immediately be sent to one of your servers. A request will take exactly
1000 milliseconds to process, and it must be processed right away.
Each server can work on at most k requests simultaneously. Given this limitation, can you
calculate the minimum number of servers needed to prevent another system breakdown?
Input
The first line contains two integers 1 ≤ n ≤ 100 000 and 1 ≤ k ≤ 100 000, the number of
upcoming requests and the maximum number of requests per second that each server can handle.
Then follow n lines with one integer 0 ≤ t_i ≤ 100 000 each, specifying that the i-th request
will happen t_i milliseconds from the exact moment you notified your customers. The timestamps
are sorted in chronological order. It is possible that several requests come in at the same time.
Output
Output a single integer on a single line: the minimum number of servers required to process all
the incoming requests, without another system breakdown.
Sample Input 1
2 1
0
1000

Sample Output 1
1
Sample Input 2
3 2
1000
1010
1999

Sample Output 2
2
NCPC 2015 Problem D: Disastrous Downtime
Problem summary: there are n requests; each server can work on at most k requests at the same time, and every request takes exactly one second (1000 ms) to process. Given the timestamps of the n requests, find the minimum number of servers needed so that no request goes unhandled.
Solution: my idea is two pointers i and j, with i trailing and j leading. For each i, find the smallest j such that data[j] - data[i] ≥ 1000; then j - i is the number of requests that must be processed within that one-second window. Take the maximum of this count over all i. Since one server can handle k requests at once, divide that maximum by k, rounding up, to get the number of servers.
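In other words, the answer is ⌈max_overlap / k⌉, where max_overlap is the largest number of requests whose 1000 ms processing windows overlap at some instant. Sample 2 checks out: the window starting at t = 1000 covers all three requests (1000, 1010, 1999), so max_overlap = 3 and ⌈3/2⌉ = 2, matching the expected output.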
Another approach is to store the requests in a difference array and sweep over it once; conceptually it is also a two-pointer style scan over time.
My code:
#include <iostream>
#include <cstdio>
#include <algorithm>
#include <cstring>
using namespace std;

int data[100005];

int main() {
    int n, k;
    while (scanf("%d%d", &n, &k) != EOF) {
        memset(data, 0, sizeof(data));
        for (int i = 0; i < n; i++) scanf("%d", &data[i]);
        sort(data, data + n);   // input is already sorted, but sorting is harmless

        // Two pointers: for each i, advance j to the first request that starts
        // at least 1000 ms after request i; then j - i requests overlap in that window.
        int sum = 0;            // maximum number of overlapping requests
        int i, j;
        for (i = 0, j = 0; i < n && j < n; ) {
            while (j < n && data[j] - data[i] < 1000) j++;
            if (j == n) break;
            sum = max(sum, j - i);
            i++;
        }
        sum = max(sum, j - i);  // account for the final window

        // Each server handles k requests at once, so round up.
        int ans = sum / k;
        if (sum % k) ans++;
        printf("%d\n", ans);
    }
    return 0;
}
The other approach:
#include <iostream>
#include <cstdio>
#include <cstring>
#include <algorithm>
using namespace std;

int data[101005];   // difference array over time in milliseconds

int main() {
    int n, k, s;
    scanf("%d%d", &n, &k);
    memset(data, 0, sizeof(data));
    for (int i = 0; i < n; i++) {
        scanf("%d", &s);
        data[s]++;          // request becomes active at time s
        data[s + 1000]--;   // ...and finishes 1000 ms later
    }
    // Prefix-sum sweep: sum is the number of requests active at time i.
    int sum = 0, ans = 0;
    for (int i = 0; i < 101005; i++) {
        sum += data[i];
        ans = max(sum, ans);
    }
    // Round up: each server handles at most k concurrent requests.
    if (ans % k == 0) printf("%d\n", ans / k);
    else printf("%d\n", ans / k + 1);
    return 0;
}
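Both versions are easily fast enough: the two-pointer scan is O(n log n) because of the sort (and would be O(n) without it, since the timestamps already arrive in chronological order), while the difference-array sweep is O(n + T) with T ≈ 101 000 time slots. As a quick check, feeding either program Sample Input 2 should print 2.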