"""Run a version of the WebPA method to calculate adjusted assignment scores based on students' ratings of self and peer
contributions to group work. Works either by generating one-question spreadsheet templates that are completed and
submitted to a separate assignment by students, or by creating per-group quizzes to capture ratings. This script was inspired by
an offline version of the WebPA scoring system that was originally developed in R by Natalia Obukhova, Chat
Wacharamanotham and Alexander Eiselmayer.
Example usage:
1) Initialise groups on Canvas using "Create Group Set". Select "Allow self sign-up" and auto-create N groups. Ask
students to move themselves into groups. Make sure to disable self sign-up once group membership has been finalised!
2) Create a group assignment, and select "Assign grades to each student individually" in its settings
3) Run this script in `--setup` mode to create (a) group member contribution forms or (b) quizzes, then:
(a) Distribute these forms to group members. To streamline this process, use the `conversationcreator` script
(though beware of filling up your personal 50MB limited `conversation attachments` folder); or, the
`feedbackuploader` script (note that grades need to be posted before students can see comments, but posting
only graded submissions makes *all* comments visible, which is sufficient). Create a separate individual
assignment to gather the resulting contribution forms.
-- or --
(b) In quiz mode there is nothing else to do, but it may be useful to remind students about this task - see the
`conversationcreator` script in `--groups` mode. Note that this script creates a unique quiz per group, rather
than a single overall quiz, which allows quizzes to be customised to identify individual group members to avoid
errors (such as misidentifying the person being rated). When setting up quizzes it can be worth waiting until
relatively close to the desired quiz start time, so that any group membership changes are taken into account.
If you do find that quiz group membership is outdated, see this script's `--setup-quiz-delete-existing` option
to quickly remove these old assignments.
4) Mark the group assignment as normal
5) Use this script to calculate adjusted grades. In spreadsheet mode, use the `submissiondownloader` script to retrieve
contribution forms. In quiz mode, submissions are retrieved automatically.
6) Use the feedbackuploader script to add the scaled marks from this script's output (webpa-final-marks.xlsx) to the
original assignment"""
__author__ = 'Simon Robinson'
__copyright__ = 'Copyright (c) 2024 Simon Robinson'
__license__ = 'Apache 2.0'
__version__ = '2024-05-13' # ISO 8601 (YYYY-MM-DD)
import argparse
import contextlib
import datetime
import json
import math
import os
import random
import re
import sys
import uuid
# noinspection PyPackageRequirements
import numpy # NumPy is a Pandas dependency, so guaranteed to be present because we require Pandas (below)
import openpyxl.styles.differential
import openpyxl.utils
# noinspection PyPackageRequirements
import pandas # we don't list Pandas in requirements.txt to skip installing for other scripts (which do not require it)
import requests
import requests.structures
from canvashelpers import Args, Utils, Config
TIMESTAMP_FORMAT = '%Y-%m-%dT%H:%M:%SZ'  # e.g., '2024-12-31T13:30:00Z'
WEBPA_HEADERS = ['Respondent', 'Person', 'Student №', 'Rating', 'Comments (optional)', 'Group №']
WEBPA_QUIZ_GROUP = 'Group contribution (WebPA)'
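For reference, the underlying WebPA calculation that this script applies can be sketched in isolation. This is an illustrative helper, not part of this script's API: the function name, the argument shapes, and the non-respondent scaling shown here are this sketch's assumptions.

```python
def webpa_scores(ratings, group_size):
    """Sketch of the WebPA calculation for one group.

    `ratings` maps each respondent to their ratings of every member, e.g.,
    {rater: {member: score}}. Each rater's scores are first normalised to sum
    to 1 (splitting their "vote" across the group), then summed per member,
    and finally scaled to compensate for non-respondents so that scores
    average 1.0 across the group.
    """
    scores = {}
    for rater_ratings in ratings.values():
        total = sum(rater_ratings.values())
        for member, value in rater_ratings.items():
            scores[member] = scores.get(member, 0.0) + value / total
    multiplier = group_size / len(ratings)  # compensate for missing respondents
    return {member: score * multiplier for member, score in scores.items()}
```

A member rated above average receives a score above 1.0, which can then be used as a multiplier on the original group mark.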
def get_args():
parser = argparse.ArgumentParser()
parser_example_date = datetime.datetime.now(datetime.UTC).strftime(TIMESTAMP_FORMAT).rstrip('Z')
parser.add_argument('group', nargs=1,
help='Please provide the URL of the groups page that shows the group set you wish to use for '
'the WebPA exercise (e.g., https://canvas.instructure.com/courses/[course-id]/groups#tab-'
'[set-id]). Note that Canvas does not always update the URL when switching group tabs, '
'so it is worth opening the desired group in a new window to be sure')
parser.add_argument('--working-directory', default=None,
help='The location to use for processing and output. The script will work in a subfolder of '
'this directory that is named as the Canvas group set ID (see `group` parameter). When '
'`--setup` mode is activated and set to `spreadsheet` the given subfolder will be created '
'by the script (it should not already exist). When `--setup` is set to `quiz` or '
'`newquiz`, this parameter is only required if `--setup-quiz-export-links` is set. When '
'`--setup` is not specified, the use of spreadsheets is assumed, and this subfolder '
'should contain the individual student responses to the WebPA exercise, named as [student '
'number].xlsx (missing files will be treated as non-respondents). Note: see `--quiz-group'
'-name` for processing quiz responses. Default: the same directory as this script')
parser.add_argument('--setup', default=None,
help='When this parameter is set to `quiz` or `newquiz`, the script will create Canvas quizzes '
'(Classic or New, respectively) to be completed by group members to rate their peers\' '
'contributions. If set to `spreadsheet`, the script will generate empty WebPA forms to be '
'distributed to group members (via, e.g., the `conversationcreator` script). If this '
'parameter is not set, the script will look for group members\' responses (searching in '
'`--working-directory`; or, alternatively, if `--quiz-group-name` is set, the named '
'Canvas assignment group). With the latter, the script will automatically detect '
'the type of quizzes in use. Please note, however, that Classic and New quizzes should '
'not be mixed within the same assignment group. If you use both types, create different '
'assignment groups to separate them')
parser.add_argument('--quiz-group-name', default=None,
help='When `--setup` mode is not specified, setting this parameter causes the script to look '
'for a Canvas assignment group with this name to load individual quizzes and responses '
'from. When `--setup` mode is activated and set to `quiz` or `newquiz`, this is the name '
'of the assignment group to place the generated quizzes within. If this parameter is not '
'set, the default name is `%s [current date/time]`. In either case, if the assignment '
'group does not exist, it will be created. Note that *all* existing quizzes in the '
'assignment group will be assumed to be part of the current WebPA process, so if the '
'process is to be run multiple times (i.e., for different assignments), different group '
'names should be used. This parameter has no effect when `--setup` mode is activated and '
'set to `spreadsheet`' % WEBPA_QUIZ_GROUP)
group_quiz = parser.add_argument_group(title='Quiz setup (see `canvashelpers.config` for additional '
'configuration). The following options only apply when `--setup` '
'mode is activated and set to `quiz` or `newquiz`')
group_quiz.add_argument('--setup-quiz-available-from', default=None,
help='The date/time from which the WebPA quiz should be made available to respondents. '
'This value should be specified as a timezone string - for example: %s. If not set, '
'the quiz is made available immediately' % parser_example_date)
group_quiz.add_argument('--setup-quiz-due-at', default=None,
help='The date/time at which WebPA quiz responses should be due. This value should be '
'specified as a timezone string - for example: %s. If not set, the quiz has no due '
'date' % parser_example_date)
group_quiz.add_argument('--setup-quiz-export-links', action='store_true',
help='If set, the links to each contribution quiz will be exported to a spreadsheet named '
'[`--quiz-group-name` value].xlsx in `--working-directory` (useful, for example, when '
'messaging groups separately outside of this script\'s operation)')
group_spreadsheet = parser.add_argument_group(title='Spreadsheet setup. The following options only apply when '
'`--setup` mode is activated and set to `spreadsheet`')
group_spreadsheet.add_argument('--setup-spreadsheet-template', default=None,
help='An Excel template file to be used to create group members\' rating forms. '
'Useful if you would like to add instructions or other content to the forms '
'each group member completes. The template should already contain the '
'response column headers %s as its last row. If this parameter is not set, a '
'new spreadsheet will be created with these column headers.' % WEBPA_HEADERS)
group_spreadsheet.add_argument('--setup-spreadsheet-group-output', action='store_true',
help='Whether to generate a customised WebPA response form for each student '
'number in the group (default); or, if set, one generic spreadsheet per group')
group_processing = parser.add_argument_group(title='Processing. The following options only apply when `--setup` '
'mode is not active')
group_processing.add_argument('--marks-file', required='--setup' not in ''.join(sys.argv),
help='An XLSX or CSV file containing a minimum of two columns: student number (or '
'group name) and original (unscaled) mark, in that order')
group_processing.add_argument('--minimum-variance', type=float, default=0.2,
help='The minimum WebPA variance level at which contribution ratings will be used to '
'adjust marks. Default: 0.2')
group_processing.add_argument('--mark-rounding', type=float, default=0.5,
help='A fractional value to be used for rounding marks. For example, 5 rounds to the '
'nearest 5 marks. Must be greater than 0. Default: 0.5')
group_processing.add_argument('--maximum-mark', type=float, default=100,
help='The maximum possible mark for the assignment that this exercise is being '
'applied to, used to cap adjusted marks. Default: 100')
group_processing.add_argument('--context-summaries', action='store_true',
help='If set, the script will add two columns to the results spreadsheet: `Errors` '
'summarises processing issues when forms were submitted but found to be '
'invalid, and `Comment` provides a ready-made summary of the submission that '
'can be provided to each submitting student')
group_test = parser.add_argument_group(title='Testing, checking and review')
group_test.add_argument('--dry-run', action='store_true',
help='Preview the script\'s Canvas actions without actually making any changes. Only '
'applicable when creating quizzes; ignored for local file analysis/creation')
group_test.add_argument('--setup-spreadsheet-test', action='store_true',
help='Tells the script to insert random responses into the generated WebPA forms '
'(useful for testing). Only applicable when `--setup` mode is activated and set to '
'`spreadsheet`')
group_test.add_argument('--setup-quiz-delete-existing', action='store_true',
help='Delete any existing quizzes in the `--quiz-group-name` assignment group. Only '
'applicable when `--setup` mode is activated and set to `quiz`. If this parameter is '
'set, only this operation will be performed; quizzes will not be created, and other '
'configuration options except for `--dry-run` will be ignored')
return parser.parse_args()
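The `--mark-rounding` and `--maximum-mark` options described above can be illustrated with a small sketch (a hypothetical helper shown for clarity, not this script's actual rounding implementation):

```python
import math


def round_mark(mark, rounding=0.5, maximum=100.0):
    # round to the nearest multiple of `rounding` (halves round up), then cap
    # at the assignment's maximum possible mark
    if rounding <= 0:
        raise ValueError('--mark-rounding must be greater than 0')
    return min(math.floor(mark / rounding + 0.5) * rounding, maximum)
```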
class GroupResponseProcessor:
@staticmethod
def setup_spreadsheets(groups):
if TEMPLATE_FILE:
response_template_workbook = openpyxl.load_workbook(TEMPLATE_FILE)
response_template_sheet = response_template_workbook[response_template_workbook.sheetnames[0]]
else:
response_template_workbook = openpyxl.Workbook()
response_template_sheet = response_template_workbook.active
response_template_sheet.title = 'WebPA response form'
response_template_sheet.append(WEBPA_HEADERS)
initial_max_rows = response_template_sheet.max_row
# noinspection SpellCheckingInspection
thin_border = openpyxl.styles.borders.Border(
left=openpyxl.styles.borders.Side(border_style=openpyxl.styles.borders.BORDER_THIN, color='00AAAAAA'),
right=openpyxl.styles.borders.Side(border_style=openpyxl.styles.borders.BORDER_THIN, color='00AAAAAA'),
top=openpyxl.styles.borders.Side(border_style=openpyxl.styles.borders.BORDER_THIN, color='00AAAAAA'),
bottom=openpyxl.styles.borders.Side(border_style=openpyxl.styles.borders.BORDER_THIN, color='00AAAAAA')
)
output_count = 0
for group_key in sorted(groups):
for group_member in groups[group_key]:
response_template_sheet.append(
[None, group_member['student_name'], group_member['student_number'], None, None, group_key])
if TEMPLATE_FILE: # highlight the part of the template that needs to be completed
for response_row in response_template_sheet.iter_rows(min_row=initial_max_rows):
for cell in [response_row[0], response_row[1], response_row[2], response_row[5]]:
cell.fill = openpyxl.styles.PatternFill(start_color='00E7E6E6', end_color='00E7E6E6',
fill_type='solid')
for cell in response_row:
cell.border = thin_border
if args.setup_spreadsheet_group_output:
# just a generic form for the whole group to complete (and select their own row manually)
response_template_workbook.save(os.path.join(WORKING_DIRECTORY, 'group-%d.xlsx' % group_key))
output_count += 1
else:
# create a personalised form for each group member (with their own row pre-selected)
for group_member in groups[group_key]:
for response_row in response_template_sheet.iter_rows(min_row=initial_max_rows + 1, max_col=4):
response_row[0].value = None
if response_row[2].value == group_member['student_number']:
response_row[0].value = '✔'
response_row[0].alignment = openpyxl.styles.Alignment(horizontal='center')
if args.setup_spreadsheet_test:
response_row[3].value = random.randint(1, 5)
print('WARNING: TEST MODE is active; generating sample response data:',
response_row[3].value)
response_template_workbook.save(
os.path.join(WORKING_DIRECTORY, '%s.xlsx' % group_member['student_number']))
output_count += 1
# reset for next group
response_template_sheet.delete_rows(initial_max_rows + 1,
response_template_sheet.max_row - initial_max_rows)
print('Successfully generated', output_count, 'WebPA forms to', WORKING_DIRECTORY)
@staticmethod
def setup_quizzes(groups, assignment_group_id):
# the quiz can be customised in the canvashelpers.config file
config_settings = Config.get_settings()
# for ease, we build the quiz link list every time, and just don't save it if not required
quiz_link_workbook = openpyxl.Workbook()
quiz_link_workbook_sheet = quiz_link_workbook.active
quiz_link_workbook_sheet.title = 'WebPA quiz links'
quiz_link_workbook_sheet.freeze_panes = 'A2' # set the first row as a header
quiz_link_workbook_sheet.append(['Group name', 'Quiz link'])
output_count = 0
for group_key in sorted(groups):
# each group has a separate quiz that is only accessible to that group's members
print('\nCreating WebPA quiz for student group', group_key, '(%s members)' % len(groups[group_key]))
quiz_configuration = {
'quiz[title]': '%s [%s]' % (config_settings['webpa_quiz_title'], groups[group_key][0]['group_name']),
'quiz[description]': config_settings['webpa_quiz_description'],
'quiz[quiz_type]': 'graded_survey',
'quiz[assignment_group_id]': assignment_group_id,
'quiz[show_correct_answers]': 'false', # note: must be a string not a boolean
'quiz[only_visible_to_overrides]': True
}
if args.setup_quiz_available_from:
quiz_configuration['quiz[unlock_at]'] = args.setup_quiz_available_from
if args.setup_quiz_due_at:
quiz_configuration['quiz[due_at]'] = args.setup_quiz_due_at
quiz_configuration['quiz[lock_at]'] = args.setup_quiz_due_at
if args.dry_run:
print('\tDRY RUN: skipping creation of new quiz:', quiz_configuration['quiz[title]'])
current_quiz_id = -1
current_quiz_assignment_id = -1
else:
quiz_creation_response = requests.post('%s/quizzes' % COURSE_URL,
data=quiz_configuration, headers=Utils.canvas_api_headers())
if quiz_creation_response.status_code != 200:
print('\tERROR: unable to create quiz for group', group_key, ':', quiz_creation_response.text,
'- aborting')
sys.exit()
quiz_creation_response_json = quiz_creation_response.json()
current_quiz_id = quiz_creation_response_json['id']
current_quiz_assignment_id = quiz_creation_response_json['assignment_id']
print('\tCreated quiz', quiz_configuration['quiz[title]'], '- ID', current_quiz_id,
'with assignment ID', current_quiz_assignment_id)
# each member has a separate contribution question
question_position = 0
for member_id, group_member in enumerate(groups[group_key]):
student_description = '%s (%s)' % (group_member['student_name'], group_member['student_number'])
question_position = member_id + 1 # uses 1-based indexing
quiz_question_configuration = {
'question[question_name]': group_member['student_number'],
'question[question_text]': config_settings['webpa_rating_question_description'].replace(
'{group member}', student_description),
'question[question_type]': 'numerical_question',
'question[position]': question_position,
'question[answers][0][answer_range_start]': '1', # note incorrect API documentation (ditto below)
'question[answers][0][answer_range_end]': '5',
'question[answers][0][numerical_answer_type]': 'range_answer'
}
if args.dry_run:
print('\tDRY RUN: skipping creation of new quiz question:',
quiz_question_configuration['question[question_name]'])
else:
quiz_question_response = requests.post('%s/quizzes/%s/questions' % (COURSE_URL, current_quiz_id),
data=quiz_question_configuration,
headers=Utils.canvas_api_headers())
if quiz_question_response.status_code != 200:
print('\tERROR: unable to create question',
quiz_question_configuration['question[question_name]'],
'for quiz:', quiz_question_response.text)
sys.exit()
print('\tCreated quiz question:', quiz_question_configuration['question[question_name]'])
# at the end we ask for any general comments - in most cases these are not used, but students often like to
# be able to provide this (note: if left empty Canvas warns the student, hence the prompt to enter "None")
quiz_question_configuration = {
'question[question_name]': 'Comments (optional)', # not currently customisable as Canvas hides this
'question[question_text]': config_settings['webpa_comment_question_description'],
'question[question_type]': 'short_answer_question',
'question[position]': question_position + 1
}
if args.dry_run:
print('\tDRY RUN: skipping creation of general comments quiz question:',
quiz_question_configuration['question[question_name]'])
else:
quiz_question_response = requests.post('%s/quizzes/%s/questions' % (COURSE_URL, current_quiz_id),
data=quiz_question_configuration,
headers=Utils.canvas_api_headers())
if quiz_question_response.status_code != 200:
print('\tERROR: unable to create general comments question',
quiz_question_configuration['question[question_name]'], 'for quiz:',
quiz_question_response.text)
sys.exit()
print('\tCreated general comments quiz question:',
quiz_question_configuration['question[question_name]'])
# an update push is required after editing questions
quiz_configuration['quiz[notify_of_update]'] = 'false'
quiz_configuration['quiz[published]'] = True
if args.dry_run:
print('\tDRY RUN: skipping update push for quiz', quiz_configuration['quiz[title]'])
else:
quiz_update_response = requests.put('%s/quizzes/%s' % (COURSE_URL, current_quiz_id),
data=quiz_configuration, headers=Utils.canvas_api_headers())
if quiz_update_response.status_code != 200:
print('\tERROR: unable to update quiz', quiz_configuration['quiz[title]'], ':',
quiz_update_response.text, '- aborting')
sys.exit()
print('\tPushed update for quiz', quiz_configuration['quiz[title]'])
# hide the assignment in the gradebook via the assignments API
assignment_configuration = {
'assignment[omit_from_final_grade]': True,
'assignment[hide_in_gradebook]': True
}
if args.dry_run:
print('\tDRY RUN: skipping gradebook configuration for quiz', quiz_configuration['quiz[title]'])
else:
quiz_update_response = requests.put('%s/assignments/%s' % (COURSE_URL, current_quiz_assignment_id),
data=assignment_configuration, headers=Utils.canvas_api_headers())
if quiz_update_response.status_code != 200:
print('\tERROR: unable to update gradebook configuration for quiz',
quiz_configuration['quiz[title]'], ':', quiz_update_response.text, '- aborting')
sys.exit()
print('\tUpdated gradebook configuration for quiz', quiz_configuration['quiz[title]'])
# finally, configure access so that only this group's members can see and respond to this particular quiz
current_group_canvas_ids = [student['student_canvas_id'] for student in groups[group_key]]
GroupResponseProcessor.configure_quiz_access(current_quiz_assignment_id, current_group_canvas_ids)
if not args.dry_run:
quiz_link = '%s/quizzes/%s' % (COURSE_URL.replace('/api/v1', ''), current_quiz_id)
quiz_link_workbook_sheet.append([groups[group_key][0]['group_name'], quiz_link])
print('\tFinished configuring quiz at', quiz_link)
output_count += 1
if args.setup_quiz_export_links:
quiz_link_file = os.path.join(WORKING_DIRECTORY, '%s.xlsx' % args.quiz_group_name)
print('%s quiz links to' % ('DRY RUN: skipping saving' if args.dry_run else 'Saving'), quiz_link_file)
if not args.dry_run:
quiz_link_workbook.save(quiz_link_file)
print('Finished processing', output_count, 'groups')
return
@staticmethod
def setup_new_quizzes(groups, assignment_group_id):
# the quiz can be customised in the canvashelpers.config file
config_settings = Config.get_settings()
# for ease, we build the quiz link list every time, and just don't save it if not required
quiz_link_workbook = openpyxl.Workbook()
quiz_link_workbook_sheet = quiz_link_workbook.active
quiz_link_workbook_sheet.title = 'WebPA new quiz links'
quiz_link_workbook_sheet.freeze_panes = 'A2' # set the first row as a header
quiz_link_workbook_sheet.append(['Group name', 'New quiz link'])
output_count = 0
for group_key in sorted(groups):
# each group has a separate quiz that is only accessible to that group's members
print('\nCreating WebPA new quiz for student group', group_key, '(%s members)' % len(groups[group_key]))
quiz_configuration = {
'quiz[title]': '%s [%s]' % (config_settings['webpa_quiz_title'], groups[group_key][0]['group_name']),
'quiz[instructions]': config_settings['webpa_quiz_description'],
'quiz[assignment_group_id]': assignment_group_id,
'quiz[quiz_settings][result_view_settings][result_view_restricted]': True, # note documentation typo
'quiz[quiz_settings][result_view_settings][display_points_awarded]': 'false', # note: must be a string
'quiz[quiz_settings][result_view_settings][display_points_possible]': 'false',
'quiz[quiz_settings][result_view_settings][display_items]': 'false'
}
if args.setup_quiz_available_from:
quiz_configuration['quiz[unlock_at]'] = args.setup_quiz_available_from
if args.setup_quiz_due_at:
quiz_configuration['quiz[due_at]'] = args.setup_quiz_due_at
quiz_configuration['quiz[lock_at]'] = args.setup_quiz_due_at
if args.dry_run:
print('\tDRY RUN: skipping creation of new quiz:', quiz_configuration['quiz[title]'])
current_quiz_id = -1
else:
quiz_creation_response = requests.post('%s/quizzes' % GroupResponseProcessor.new_quiz_api(COURSE_URL),
data=quiz_configuration, headers=Utils.canvas_api_headers())
if quiz_creation_response.status_code != 200:
print('\tERROR: unable to create new quiz for group', group_key, ':', quiz_creation_response.text,
'- aborting')
sys.exit()
quiz_creation_response_json = quiz_creation_response.json()
current_quiz_id = quiz_creation_response_json['id']
print('\tCreated new quiz', quiz_configuration['quiz[title]'], '- ID', current_quiz_id)
# each member has a separate contribution question
question_position = 0
for member_id, group_member in enumerate(groups[group_key]):
student_description = '%s (%s)' % (group_member['student_name'], group_member['student_number'])
question_position = member_id + 1 # uses 1-based indexing
quiz_question_configuration = {
'item': {
'entry_type': 'Item',
'position': question_position,
'entry': {
'interaction_type_slug': 'choice',
'title': group_member['student_number'],
'item_body': config_settings['webpa_rating_question_body'].replace('{group member}',
student_description),
}
}
}
answer_uuid = None
interaction_data = []
for i in range(5):
answer_rating = i + 1
answer_uuid = str(uuid.uuid4())
interaction_data.append({
'id': answer_uuid,
'position': answer_rating,
'itemBody': '<p><b>%d</b>: %s</p>' % ( # to guarantee starting with the correct rating number
answer_rating, config_settings['webpa_rating_question_choice_%d' % answer_rating])
})
quiz_question_configuration['item']['entry']['interaction_data'] = {'choices': interaction_data}
quiz_question_configuration['item']['entry']['scoring_algorithm'] = 'Equivalence'
quiz_question_configuration['item']['entry']['scoring_data'] = {'value': answer_uuid}
if args.dry_run:
print('\tDRY RUN: skipping creation of new quiz question:',
quiz_question_configuration['item']['entry']['title'])
else:
quiz_question_response = requests.post(
'%s/quizzes/%s/items' % (GroupResponseProcessor.new_quiz_api(COURSE_URL), current_quiz_id),
json=quiz_question_configuration, headers=Utils.canvas_api_headers())
if quiz_question_response.status_code != 200:
print('\tERROR: unable to create question',
quiz_question_configuration['item']['entry']['title'], 'for quiz:',
quiz_question_response.text)
sys.exit()
print('\tCreated new quiz question:', quiz_question_configuration['item']['entry']['title'])
# at the end we ask for any general comments - in most cases these are not used, but students often like to
# be able to provide this (note: if left empty Canvas warns the student, hence the prompt to enter "None")
quiz_question_configuration = {
'item[entry_type]': 'Item',
'item[position]': question_position + 1,
'item[entry][interaction_type_slug]': 'essay',
'item[entry][title]': 'Comments (optional)', # not currently customisable as Canvas hides this
'item[entry][item_body]': config_settings['webpa_comment_question_description'],
'item[entry][interaction_data][rce]': 'false', # note: must be a string not a boolean
'item[entry][scoring_algorithm]': 'None',
'item[entry][scoring_data][value]': ''
}
if args.dry_run:
print('\tDRY RUN: skipping creation of general comments new quiz question:',
quiz_question_configuration['item[entry][title]'])
else:
quiz_question_response = requests.post(
'%s/quizzes/%s/items' % (GroupResponseProcessor.new_quiz_api(COURSE_URL), current_quiz_id),
data=quiz_question_configuration,
headers=Utils.canvas_api_headers())
if quiz_question_response.status_code != 200:
print('\tERROR: unable to create general comments question',
quiz_question_configuration['item[entry][title]'], 'for quiz:',
quiz_question_response.text)
sys.exit()
print('\tCreated general comments new quiz question:',
quiz_question_configuration['item[entry][title]'])
# publish via the assignments (rather than New Quizzes) API
assignment_configuration = {
'assignment[published]': True,
'assignment[only_visible_to_overrides]': True,
'assignment[omit_from_final_grade]': True,
'assignment[hide_in_gradebook]': True
}
if args.dry_run:
print('\tDRY RUN: skipping update push for new quiz', quiz_configuration['quiz[title]'])
else:
quiz_update_response = requests.put('%s/assignments/%s' % (COURSE_URL, current_quiz_id),
data=assignment_configuration, headers=Utils.canvas_api_headers())
if quiz_update_response.status_code != 200:
print('\tERROR: unable to update new quiz', quiz_configuration['quiz[title]'], ':',
quiz_update_response.text, '- aborting')
sys.exit()
print('\tPushed update for new quiz', quiz_configuration['quiz[title]'])
# finally, configure access so that only this group's members can see and respond to this particular quiz
current_group_canvas_ids = [student['student_canvas_id'] for student in groups[group_key]]
GroupResponseProcessor.configure_quiz_access(current_quiz_id, current_group_canvas_ids)
if not args.dry_run:
quiz_link = '%s/assignments/%s' % (COURSE_URL.replace('/api/v1', ''), current_quiz_id)
quiz_link_workbook_sheet.append([groups[group_key][0]['group_name'], quiz_link])
print('\tFinished configuring new quiz at', quiz_link)
output_count += 1
if args.setup_quiz_export_links:
quiz_link_file = os.path.join(WORKING_DIRECTORY, '%s.xlsx' % args.quiz_group_name)
print('%s new quiz links to' % ('DRY RUN: skipping saving' if args.dry_run else 'Saving'), quiz_link_file)
if not args.dry_run:
quiz_link_workbook.save(quiz_link_file)
print('Finished processing', output_count, 'groups')
return
@staticmethod
def new_quiz_api(original_api_url):
# New Quizzes endpoints are hosted at a separate path, bizarrely
# e.g., https://canvas.example.edu/api/v1/courses/1234 -> https://canvas.example.edu/api/quiz/v1/courses/1234
return original_api_url.replace('/api/v1/', '/api/quiz/v1/')
@staticmethod
def configure_quiz_access(current_quiz_id, current_group_canvas_ids):
access_override_configuration = {'assignment_override[student_ids][]': current_group_canvas_ids}
if args.setup_quiz_available_from:
access_override_configuration['assignment_override[unlock_at]'] = args.setup_quiz_available_from
if args.setup_quiz_due_at:
access_override_configuration['assignment_override[due_at]'] = args.setup_quiz_due_at
access_override_configuration['assignment_override[lock_at]'] = args.setup_quiz_due_at
if args.dry_run:
print('\tDRY RUN: skipping quiz assignment access configuration for Canvas users:',
current_group_canvas_ids, 'available from', args.setup_quiz_available_from, 'and due at',
args.setup_quiz_due_at)
else:
access_override_response = requests.post(
'%s/assignments/%s/overrides' % (COURSE_URL, current_quiz_id),
data=access_override_configuration, headers=Utils.canvas_api_headers())
if access_override_response.status_code != 201:  # note 201 Created, not 200 OK
print('\tERROR: unable to configure quiz assignment access for Canvas users', current_group_canvas_ids,
':', access_override_response.text, '- aborting')
sys.exit()
print('\tConfigured quiz assignment access for Canvas users', current_group_canvas_ids, 'available from',
args.setup_quiz_available_from, 'and due at', args.setup_quiz_due_at)
@staticmethod
def get_assignment_group_id(group_name):
assignment_group_response = Utils.canvas_multi_page_request('%s/assignment_groups' % COURSE_URL,
type_hint='assignment groups')
if not assignment_group_response:
return None
assignment_group_response_json = json.loads(assignment_group_response)
for group_properties in assignment_group_response_json:
if group_properties['name'] == group_name:
return group_properties['id']
return None  # no assignment group with the requested name exists
@staticmethod
def create_assignment_group(new_group_name):
group_creation_response = requests.post('%s/assignment_groups' % COURSE_URL,
data={'name': new_group_name},
headers=Utils.canvas_api_headers())
if group_creation_response.status_code != 200:
print('\tERROR: unable to create assignment group; aborting')
sys.exit()
return group_creation_response.json()['id']
@staticmethod
def get_spreadsheets(groups, expected_submissions, summary_sheet):
response_files = [f for f in os.listdir(WORKING_DIRECTORY) if re.match(r'\d+\.xlsx?$', f)]
errors = {}
skipped_files = []
for file in response_files:
invalid_file = False
response_workbook = openpyxl.load_workbook(os.path.join(WORKING_DIRECTORY, file))
response_sheet = response_workbook[response_workbook.sheetnames[0]]
# response_sheet.column_dimensions['C'].number_format = '@' # force column format to text - doesn't work
found_header_row = False
valid_members = []
expected_rater = file.split('.')[0]
if expected_rater not in expected_submissions:
print('WARNING: skipping unexpected form', file)
skipped_files.append(file)
continue
current_group = None
current_rater = None
current_responses = []
current_errors = []
current_total = 0
found_members = []
for response_row in response_sheet.iter_rows(max_col=6):
cells = [c.value for c in response_row]
if cells == WEBPA_HEADERS:
found_header_row = True
continue
if all(v is None for v in cells):
continue # sometimes openpyxl produces hundreds of empty rows at the end of a table - ignore
if found_header_row:
if not cells[2]:
continue # sometimes xlsx files contain empty rows after content - ignore
cells[2] = str(cells[2]).split('.')[0] # make sure student number is treated as a string
found_members.append(cells[2]) # so we can check that all expected members are present
if not current_group:
current_group = cells[5]
valid_members = [g['student_number'] for g in groups[current_group]]
# validate the submitted data against Canvas group membership
ignored_rating = False
if cells[0]: # note that we accept any content, not just the '✔' we ask for
if not current_rater and cells[2] == expected_rater:
current_rater = cells[2]
else:
respondent_error = 'Incorrect or multiple respondents selected'
if respondent_error not in current_errors:  # record this error once only
current_errors.append(respondent_error)
invalid_file = True
if cells[2] not in valid_members:
ignored_rating = True # not necessarily invalid - see membership checks below
if cells[5] != current_group:
current_errors.append('Invalid group number (%s)' % cells[5])
invalid_file = True
if cells[3] is None or type(cells[3]) not in [int, float]:
current_errors.append(
'%s rating %s' % ('Own' if cells[2] == current_rater else 'Member %s' % cells[2],
'invalid (\'%s\')' % cells[3] if cells[3] else 'missing'))
invalid_file = True
if not (invalid_file or ignored_rating):
bounded_score = round(max(min(cells[3], 5), 1)) # don't allow scores outside 1-5 (int) range
if bounded_score != cells[3]:
current_errors.append('Rating %s for %s is outside of range 1-5 (rounded to %d)' % (
cells[3], cells[2], bounded_score))
current_responses.append([None, cells[2], bounded_score, None, current_group])
current_total += bounded_score
if current_group:
sorted_found = sorted(found_members)
sorted_expected = sorted(valid_members)
if sorted_found != sorted_expected:
members_missing = set(sorted_expected) - set(sorted_found)
if members_missing:
current_errors.append('Group member(s) missing: %s' % ', '.join(members_missing))
invalid_file = True
members_added = set(sorted_found) - set(sorted_expected)
if members_added: # note: this can have legitimate explanations - e.g., group members withdrawing
current_errors.append('Non-group member(s) found: %s - ignoring' % ', '.join(members_added))
if not current_rater:
if not found_header_row:
current_errors.append('Incorrect (or edited example) rating form has been used')
else:
current_errors.append('Own name indicator missing')
invalid_file = True
if current_errors:
errors[expected_rater] = current_errors
if not invalid_file:
if current_errors:
print('WARNING: form data required corrections', file, '-', current_errors)
for response in current_responses:
response[0] = current_rater
response[3] = response[2] / current_total
summary_sheet.append(response)
else:
print('ERROR: skipping invalid form', file, '-', current_errors)
skipped_files.append(file)
respondents = [f.split('.')[0] for f in response_files if f not in skipped_files] # without invalid files
invalid = [f.split('.')[0] for f in skipped_files]
return respondents, invalid, errors
@staticmethod
def get_quizzes(groups, expected_submissions, summary_sheet, quiz_group_name):
respondents = []
invalid = []
errors = {}
# first get all quizzes within the given assignment group
assignment_group_id = GroupResponseProcessor.get_assignment_group_id(quiz_group_name)
if not assignment_group_id:
print('ERROR: unable to find quiz group name', quiz_group_name, '- aborting')
sys.exit()
assignment_list_response = Utils.canvas_multi_page_request(
'%s/assignment_groups/%s/assignments' % (COURSE_URL, assignment_group_id), type_hint='assignment list')
if not assignment_list_response:
print('\tERROR: unable to get assignment list response; aborting')
sys.exit()
assignment_list_response_json = json.loads(assignment_list_response)
for quiz in assignment_list_response_json:
if 'quiz_id' not in quiz:
# avoid having to specify quiz type for analysis by detecting the type of the first submission
print('WARNING: found new quiz assignment', quiz['id'], '- switching to new quizzes mode')
return GroupResponseProcessor.get_new_quizzes(groups, expected_submissions, summary_sheet,
assignment_list_response_json)
quiz_id = quiz['quiz_id']
print('\nFound quiz ID', quiz_id, '-', quiz['name'], 'with assignment ID', quiz['id'], 'due at',
quiz['due_at'])
current_group = int(quiz['name'].split('[')[-1].rstrip(']').split(' ')[-1])  # group number from the quiz title's trailing '[... N]'
valid_members = [g['student_number'] for g in groups[current_group]]
print('\tIdentified group', current_group, 'with expected members', valid_members)
# then all quiz questions
question_student_map = {}
quiz_question_response = requests.get('%s/quizzes/%s/questions' % (COURSE_URL, quiz_id),
headers=Utils.canvas_api_headers())
if quiz_question_response.status_code != 200:
print('\tERROR: unable to get quiz questions for quiz', quiz_id, '- aborting:',
quiz_question_response.text)
sys.exit()
quiz_question_response_json = quiz_question_response.json()
print('\tFound', end=' ')
for question in quiz_question_response_json:
question_id = question['id']
question_name = question['question_name']
rating_question = False
if question_name.isdigit():
rating_question = True
question_student_map[question_id] = question_name
print('%s question' % ('rating' if rating_question else 'comments'), question_id,
'titled:', question_name, end='; ')
print()
# then all submissions for that quiz
quiz_submission_response = requests.get('%s/quizzes/%s/submissions' % (COURSE_URL, quiz_id),
headers=Utils.canvas_api_headers())
if quiz_submission_response.status_code != 200:
print('\tERROR: unable to get quiz submissions for quiz', quiz_id, '- aborting:',
quiz_submission_response.text)
sys.exit()
quiz_submission_response_json = quiz_submission_response.json()
current_quiz_submission = quiz_submission_response_json['quiz_submissions']
if not current_quiz_submission:
print('\tNo submissions found for quiz', quiz_id, '- skipping')
continue
for submission in current_quiz_submission:
print('\tLoading quiz', quiz_id, 'submission:', submission['id'])
# then a single submission's details
quiz_submission_individual_response = requests.get(
'%s/quizzes/%s/submissions/%s' % (COURSE_URL, quiz_id, submission['id']),
params={'include[]': ['submission', 'quiz', 'user', 'submission_history']},
headers=Utils.canvas_api_headers())
if quiz_submission_individual_response.status_code != 200:
print('\t\tERROR: unable to get individual quiz response', submission['id'], '- aborting:',
quiz_submission_individual_response.text)
sys.exit()
# an array is returned, but we expect (and asked for) only one result, so this is okay
submission_summary = quiz_submission_individual_response.json()['submissions'][0]
submission_from = submission_summary['user']
current_rater = submission_from['login_id']
current_rater_name = submission_from['name']
if current_rater not in expected_submissions:
print('\t\tWARNING: skipping unexpected form from student not in any group:', current_rater)
invalid.append(current_rater)
continue
if current_rater not in valid_members:
print('\t\tWARNING: skipping unexpected form from student not in current group:', current_rater)
invalid.append(current_rater)
continue
if submission_summary['workflow_state'] not in ['complete', 'graded', 'pending_review']:
print('\t\tWARNING: skipping empty or partly-complete form from', current_rater, '-',
submission_summary)
invalid.append(current_rater)
continue
# date is oddly sometimes missing even if previously set
due_date = quiz['due_at'] or submission_summary['cached_due_date']
if due_date:
if (datetime.datetime.strptime(submission_summary['submitted_at'], TIMESTAMP_FORMAT) >
datetime.datetime.strptime(due_date, TIMESTAMP_FORMAT)):
print('\t\tWARNING: skipping late rating submission from', current_rater, '- submitted at',
submission_summary['submitted_at'], 'but due at', due_date)
invalid.append(current_rater)
continue
print('\t\tFound submission from', current_rater_name, '- Canvas ID:', submission_from['id'],
'; student number:', current_rater)
current_responses = []
current_errors = []
current_total = 0
found_members = []
invalid_response = False
# we only allow one submission to the rating quiz, but just in case, take the first valid one found
submission_answers = None
for history_entry in submission_summary['submission_history']:
if history_entry['workflow_state'] in ['complete', 'graded'] and 'submission_data' in history_entry:
submission_answers = history_entry['submission_data']
break  # stop at the first valid entry
if not submission_answers:
print('\t\tWARNING: skipping unanswered or partly-complete form from', current_rater, '-',
submission_summary)
invalid.append(current_rater)
continue
for answer in submission_answers:
answer_value = answer['text']
if answer['question_id'] in question_student_map:
rated_student = question_student_map[answer['question_id']]
found_members.append(rated_student)
print('\t\tRating from', current_rater_name, 'for', rated_student, ':', answer_value)
# validate the submitted data against Canvas group membership (remembering this may change)
if rated_student not in valid_members:
print('\t\tWARNING: Ignoring rating by', current_rater, 'of non member', rated_student)
continue
try:
original_score = float(answer_value)
bounded_score = round(max(min(original_score, 5), 1)) # only permit scores 1-5 (int)
if bounded_score != original_score:
current_errors.append('Rating %s for %s is outside of range 1-5 (rounded to %d)' % (
original_score, rated_student, bounded_score))
current_responses.append([current_rater, rated_student, bounded_score, None, current_group])
current_total += bounded_score
except ValueError:
current_errors.append('%s rating %s' % (
'Own' if rated_student == current_rater else 'Member %s' % rated_student,
'invalid (\'%s\')' % answer_value if answer_value else 'missing'))
invalid_response = True
elif answer_value and answer_value.lower().strip() != 'none':
print('\t\tWARNING: Comments from', current_rater_name, ':', answer_value.replace('\n', ' '))
# finally, check for errors and collate responses
# noinspection DuplicatedCode
if current_group:
sorted_found = sorted(found_members)
sorted_expected = sorted(valid_members)
if sorted_found != sorted_expected:
members_missing = set(sorted_expected) - set(sorted_found)
if members_missing:
current_errors.append('Group member(s) missing: %s' % ', '.join(str(m) for m in members_missing))
invalid_response = True
members_added = set(sorted_found) - set(sorted_expected)
if members_added: # note: this can have legitimate explanations - e.g., members withdrawing
current_errors.append(
'Non-group member(s) found: %s - ignoring' % ', '.join(str(m) for m in members_added))
if current_errors:
errors[current_rater] = current_errors
if not invalid_response:
respondents.append(current_rater)
if current_errors:
print('\tWARNING: form data required corrections', current_rater, '-', current_errors)
for response in current_responses:
response[3] = response[2] / current_total
summary_sheet.append(response)
else:
print('\tERROR: skipping invalid form from', current_rater, '-', current_errors)
invalid.append(current_rater)
return respondents, invalid, errors
@staticmethod
def get_new_quizzes(groups, expected_submissions, summary_sheet, assignment_list_response_json):
# frustratingly, much of this code needs to be duplicated from the quiz exporter script due to the lack of a new
# quizzes response API (note also we need to use Canvas IDs far more because New Quizzes hide student numbers)
respondents = []
invalid = []
errors = {}
config_settings = Config.get_settings()
root_instructure_domain = 'https://%s.quiz-%s-dub-%s.instructure.com/api'  # note: the 'dub' region shard is hard-coded
lti_environment_type = None # auto-detected based on first submission found
lti_institution_subdomain = None # auto-detected based on first submission found
lti_bearer_token = config_settings['new_quiz_lti_bearer_token']
bearer_token_error_message = ('See the configuration file instructions, and the assignment\'s SpeedGrader '
'page: %s/gradebook/speed_grader?assignment_id=%d') % (
assignment_list_response_json[0]['html_url'].split('/assignments')[0],
assignment_list_response_json[0]['id'])
if lti_bearer_token.startswith('*** your'):
print('WARNING: new_quiz_lti_bearer_token in', Config.FILE_PATH, 'seems to contain the example value.',
bearer_token_error_message)
html_regex = re.compile('<.*?>') # used to filter out HTML formatting from retrieved responses
for quiz in assignment_list_response_json:
quiz_id = quiz['id']
print('\nFound new quiz with assignment ID', quiz_id, 'due at', quiz['due_at'])
current_group = int(quiz['name'].split('[')[-1].rstrip(']').split(' ')[-1])  # group number from the quiz title's trailing '[... N]'
valid_members = [g['student_number'] for g in groups[current_group]]
print('\tIdentified group', current_group, 'with expected members', valid_members)
assignment_url = Utils.course_url_to_api(quiz['html_url'])
print('\tRequesting new quiz assignment submissions list from', assignment_url)
with open(os.devnull, 'w') as f, contextlib.redirect_stdout(f):
submission_list_response = Utils.get_assignment_submissions(assignment_url)
if not submission_list_response:
print('\tERROR: unable to retrieve new quiz assignment submission list')
sys.exit()
submission_list_json = json.loads(submission_list_response)
user_session_map = []
for submission_summary in submission_list_json:
if submission_summary['submission_type'] and 'external_tool_url' in submission_summary:
current_rater = submission_summary['user']['login_id']
current_rater_name = submission_summary['user']['name']
if current_rater not in expected_submissions:
print('\tWARNING: skipping unexpected new quiz from student not in any group:', current_rater)
invalid.append(current_rater)
continue
if current_rater not in valid_members:
print('\tWARNING: skipping unexpected new quiz from student not in current group:',
current_rater)
invalid.append(current_rater)
continue
if submission_summary['workflow_state'] not in ['complete', 'graded', 'pending_review']:
print('\tWARNING: skipping empty or partly-complete new quiz from', current_rater, '-',
submission_summary)
invalid.append(current_rater)
continue
# date is oddly sometimes missing even if previously set
due_date = quiz['due_at'] or submission_summary['cached_due_date']
if due_date:
if (datetime.datetime.strptime(submission_summary['submitted_at'], TIMESTAMP_FORMAT) >
datetime.datetime.strptime(due_date, TIMESTAMP_FORMAT)):
print('\tWARNING: skipping late new quiz submission from', current_rater,
'- submitted at',
submission_summary['submitted_at'], 'but due at', due_date)
invalid.append(current_rater)
continue
print('\tFound new quiz submission from', current_rater_name, '- Canvas ID:',
submission_summary['user_id'], '; student number:', current_rater)
tool_url = submission_summary['external_tool_url']
tool_url_parts = tool_url.split('.quiz-lti-dub-')
user_session_map.append({'student_number': current_rater,
'canvas_id': submission_summary['user_id'],
'session_id': tool_url.split('participant_session_id=')[1].split('&')[0]})
if not lti_institution_subdomain:
lti_institution_subdomain = tool_url_parts[0].split('//')[1]
if not lti_environment_type:
lti_environment_type = tool_url_parts[1].split('.instructure.com')[0]
if not user_session_map:
print('\tNo valid submissions found for new quiz', quiz_id, '- skipping')
continue
current_responses = []
current_errors = []
current_total = 0
found_members = []
invalid_response = False
lti_api_root = root_instructure_domain % (lti_institution_subdomain, 'lti', lti_environment_type)
quiz_api_root = root_instructure_domain % (lti_institution_subdomain, 'api', lti_environment_type)
token_headers = requests.structures.CaseInsensitiveDict()
token_headers['accept'] = 'application/json'
token_headers['authorization'] = ('%s' if 'Bearer ' in lti_bearer_token else 'Bearer %s') % lti_bearer_token
for session in user_session_map:
print('\t\tLoading new quiz session', session)
token_response = requests.get(
'%s/participant_sessions/%s/grade' % (lti_api_root, session['session_id']), headers=token_headers)
if token_response.status_code != 200:
print('\t\tERROR: unable to load new quiz session - did you set a valid new_quiz_lti_bearer_token',
'in %s?' % Config.FILE_PATH, bearer_token_error_message)