After I submitted the zipped JSON file, the following error was reported during execution:
metrics_summary = nusc_eval.main()
File "python3.8/site-packages/nuscenes/eval/tracking/evaluate.py", line 204, in main
metrics, metric_data_list = self.evaluate()
File "python3.8/site-packages/nuscenes/eval/tracking/evaluate.py", line 135, in evaluate
File "python3.8/site-packages/nuscenes/eval/tracking/evaluate.py", line 131, in accumulate_class
curr_md = curr_ev.accumulate()
File "python3.8/site-packages/nuscenes/eval/tracking/algo.py", line 140, in accumulate
acc, _ = self.accumulate_threshold(threshold)
File "python3.8/site-packages/nuscenes/eval/tracking/algo.py", line 292, in accumulate_threshold
acc_merged = MOTAccumulatorCustom.merge_event_dataframes(accs)
File "python3.8/site-packages/nuscenes/eval/tracking/mot.py", line 100, in merge_event_dataframes
df = df.events
File "python3.8/site-packages/nuscenes/eval/tracking/mot.py", line 61, in events
self.cached_events_df = MOTAccumulatorCustom.new_event_dataframe_with_data(self._indices, self._events)
File "python3.8/site-packages/nuscenes/eval/tracking/mot.py", line 37, in new_event_dataframe_with_data
df = pd.DataFrame(events, index=idx, columns=['Type', 'OId', 'HId', 'D'])
File "python3.8/site-packages/pandas/core/frame.py", line 746, in __init__
mgr = init_dict(data, index, columns, dtype=dtype)
File "python3.8/site-packages/pandas/core/dtypes/cast.py", line 1175, in construct_1d_arraylike_from_scalar
AttributeError: type object 'object' has no attribute 'dtype'
Searching on the Internet suggests this is a pandas version problem: pandas 0.25 reports this error. The relevant line in mot.py differs between versions:
Old version: nan_dtype = object
New version: nan_dtype = np.dtype("object")
I use pandas 1.4.4 locally, and evaluation runs normally. After changing that line back to nan_dtype = object, the same error is indeed reported.
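For illustration, here is a minimal sketch of why the one-line change matters. I am assuming, based on the traceback, that the failing pandas path ends up reading a `.dtype` attribute off the value passed in (internals and line numbers vary by pandas version):

```python
import numpy as np

# The old code in mot.py used a plain Python type as the "dtype":
old_nan_dtype = object
# The fixed code uses a real numpy dtype object:
new_nan_dtype = np.dtype("object")

# A plain type is not an np.dtype and has no .dtype attribute, which is
# what the older pandas path trips over (it effectively evaluates
# `object.dtype`, producing the AttributeError in the traceback above):
print(isinstance(old_nan_dtype, np.dtype))  # False
print(hasattr(old_nan_dtype, "dtype"))      # False
print(isinstance(new_nan_dtype, np.dtype))  # True
```

With `np.dtype("object")`, pandas recognizes the value as a proper dtype and never falls into the attribute lookup that raises the error.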
It seems that upgrading pandas, or changing that line of code, would solve the problem. However, nuScenes test submissions are evaluated on the eval server, whose environment we cannot modify. Has anyone who participated in the tracking challenge encountered this problem?
My team name is VPF.