add limiter for demo (open-mmlab#668)
* add limiter

* change default and write docs

* polish
dreamerlin authored Mar 3, 2021
1 parent 59ad57f commit ea0e722
Showing 2 changed files with 39 additions and 2 deletions.
6 changes: 5 additions & 1 deletion demo/README.md
@@ -217,7 +217,7 @@ We provide a demo script to implement real-time action recognition from web camera.
```shell
python demo/webcam_demo.py ${CONFIG_FILE} ${CHECKPOINT_FILE} ${LABEL_FILE} \
[--device ${DEVICE_TYPE}] [--camera-id ${CAMERA_ID}] [--threshold ${THRESHOLD}] \
[--average-size ${AVERAGE_SIZE}]
[--average-size ${AVERAGE_SIZE}] [--drawing-fps ${DRAWING_FPS}] [--inference-fps ${INFERENCE_FPS}]
```

Optional arguments:
@@ -226,6 +226,10 @@ Optional arguments:
- `CAMERA_ID`: ID of camera device. If not specified, it will be set to 0.
- `THRESHOLD`: Threshold of prediction score for action recognition. Only labels with scores higher than the threshold will be shown. If not specified, it will be set to 0.
- `AVERAGE_SIZE`: Number of latest clips to be averaged for prediction. If not specified, it will be set to 1.
- `DRAWING_FPS`: Upper bound FPS value of the output drawing. If not specified, it will be set to 20.
- `INFERENCE_FPS`: Upper bound FPS value of model inference. If not specified, it will be set to 4.

**Note**: If your hardware is good enough, increasing the values of `DRAWING_FPS` and `INFERENCE_FPS` will give a smoother experience.
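
The limiter behind these two flags follows a simple pattern: measure the time spent on the last iteration and sleep off the remainder of the per-frame time budget. A minimal sketch (the helper name `fps_limiter` is hypothetical; the demo inlines this logic directly in its drawing and inference loops):

```python
import time


def fps_limiter(max_fps):
    """Generator that paces a loop to at most ``max_fps`` iterations per second.

    ``max_fps == 0`` disables the cap, matching the demo's convention.
    """
    last = time.time()
    while True:
        yield
        if max_fps > 0:
            # Sleep off whatever is left of this iteration's time budget.
            sleep_time = 1 / max_fps - (time.time() - last)
            if sleep_time > 0:
                time.sleep(sleep_time)
            last = time.time()


# Example: pace 10 loop iterations at <= 20 FPS (roughly 0.45 s of sleeping).
start = time.time()
for _i, _tick in zip(range(10), fps_limiter(20)):
    pass  # a real loop would draw a frame or run inference here
elapsed = time.time() - start
```

Sleeping is a coarse cap, not a scheduler: iterations slower than the budget simply run unthrottled, which is why a value of zero can serve as "no limit".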

Examples:

35 changes: 34 additions & 1 deletion demo/webcam_demo.py
@@ -1,4 +1,5 @@
import argparse
import time
from collections import deque
from operator import itemgetter
from threading import Thread
@@ -43,14 +44,28 @@ def parse_args():
type=int,
default=1,
help='number of latest clips to be averaged for prediction')
parser.add_argument(
'--drawing-fps',
type=int,
default=20,
help='Set upper bound FPS value of the output drawing')
parser.add_argument(
'--inference-fps',
type=int,
default=4,
help='Set upper bound FPS value of model inference')
args = parser.parse_args()
assert args.drawing_fps >= 0 and args.inference_fps >= 0, \
    'upper bound FPS values of drawing and inference should be set as ' \
    'positive numbers, or zero for no limit'
return args


def show_results():
print('Press "Esc", "q" or "Q" to exit')

text_info = {}
cur_time = time.time()
while True:
msg = 'Waiting for action ...'
ret, frame = camera.read()
@@ -84,10 +99,18 @@ def show_results():
if ch == 27 or ch == ord('q') or ch == ord('Q'):
break

if drawing_fps > 0:
# add a limiter for actual drawing fps <= drawing_fps
sleep_time = 1 / drawing_fps - (time.time() - cur_time)
if sleep_time > 0:
time.sleep(sleep_time)
cur_time = time.time()


def inference():
score_cache = deque()
scores_sum = 0
cur_time = time.time()
while True:
cur_windows = []

@@ -122,17 +145,27 @@ def inference():
result_queue.append(results)
scores_sum -= score_cache.popleft()

if inference_fps > 0:
# add a limiter for actual inference fps <= inference_fps
sleep_time = 1 / inference_fps - (time.time() - cur_time)
if sleep_time > 0:
time.sleep(sleep_time)
cur_time = time.time()

camera.release()
cv2.destroyAllWindows()


def main():
global frame_queue, camera, frame, results, threshold, sample_length, \
data, test_pipeline, model, device, average_size, label, result_queue
data, test_pipeline, model, device, average_size, label, \
result_queue, drawing_fps, inference_fps

args = parse_args()
average_size = args.average_size
threshold = args.threshold
drawing_fps = args.drawing_fps
inference_fps = args.inference_fps

device = torch.device(args.device)
model = init_recognizer(args.config, args.checkpoint, device=device)
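
The argument handling added in `parse_args` above can be exercised in isolation. A sketch assuming only the two new flags (the helper name `parse_fps_args` is hypothetical; the real demo defines many more options):

```python
import argparse


def parse_fps_args(argv):
    # Mirrors the two flags added in this commit, with the same defaults
    # and the same non-negativity check.
    parser = argparse.ArgumentParser()
    parser.add_argument('--drawing-fps', type=int, default=20)
    parser.add_argument('--inference-fps', type=int, default=4)
    args = parser.parse_args(argv)
    assert args.drawing_fps >= 0 and args.inference_fps >= 0, \
        'upper bound FPS values should be positive, or zero for no limit'
    return args


args = parse_fps_args([])
defaults = (args.drawing_fps, args.inference_fps)  # (20, 4)
zero = parse_fps_args(['--drawing-fps', '0']).drawing_fps  # 0 means uncapped
```

Validating after `parse_args` (rather than with a custom `type=` callable) keeps the check in one place for both flags, at the cost of raising `AssertionError` instead of argparse's usual usage error.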
