This has been unused for a long time, and is getting in the way of reforms. Its position as a test rather than as a part of a framework was always going to cause oddities, particularly after we split the Copter tests into several chunks.
We had a pattern emerging of using the test name as the method name to contain the actual test. We also tended to duplicate the docstrings in the test description - or omit the docstring.
This uses reflection to retrieve both the test name and the description, meaning less duplication of this information and enforcing having docstrings on the test methods.
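A minimal sketch of the idea (the class and names here are illustrative, not the actual autotest classes): the test name comes from the bound method's __name__, the description from its __doc__, and a missing docstring becomes an immediate error rather than a silently empty description.

    class Test(object):
        '''hypothetical wrapper deriving name/description via reflection'''
        def __init__(self, function):
            if function.__doc__ is None:
                raise ValueError("%s is missing a docstring" %
                                 function.__name__)
            self.function = function
            self.name = function.__name__
            self.description = function.__doc__.strip()
    # e.g. tests = [Test(self.FlyMission), ...] rather than keeping
    # separate (name, description, function) tuples in sync by hand.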
The stack trace is already extremely useful... but mapping back to the
relevant dataflash log is a bit of a pain.
Adding more stuff to this method in the future might be good, too.
This is important when running under Valgrind as the text can be
significantly delayed. wait_statustext times out in simulation time,
whereas mavproxy.expect has a flat 60s timeout
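A rough sketch of the difference (assuming pymavlink's recv_match; this is not the actual common.py implementation): the timeout below is measured against the autopilot's reported time_boot_ms, which advances with the simulation rather than with the wallclock, so a Valgrind-slowed run does not hit it spuriously.

    def wait_statustext(mav, text, timeout=60):
        # hypothetical sketch: time out in simulation time, not wallclock
        tstart = None
        while True:
            m = mav.recv_match(type=['SYSTEM_TIME', 'STATUSTEXT'],
                               blocking=True, timeout=10)
            if m is None:
                raise TimeoutError("Link appears dead")
            if m.get_type() == 'SYSTEM_TIME':
                now = m.time_boot_ms * 1.0e-3
                if tstart is None:
                    tstart = now
                elif now - tstart > timeout:
                    raise TimeoutError("Did not see statustext %r" % text)
            elif text.lower() in m.text.lower():
                return m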
autotest: prevent method-on-undef problem in case of early test failure
autotest: plumb MAVProxy rc input back in for switch test
autotest: close off rc thread in case of timeout
Should fix this scenario:
- Plane soaring test uses different defaults file
- soaring test sets a parameter away from default value
- context pop sets it back to the value in the defaults file
- it is now in permanent storage with the default found in the
different defaults file used by the soaring test
- we finish running the soaring test and reset the SITL command line
back to default, with the old defaults file
- we go on to running the next test. At this point the value for that
parameter is the default in the defaults file used by the soaring test.
Something similar applies in the Callisto test.
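One way to picture the fix (a sketch only; these names are illustrative rather than the real context machinery): remember the value a parameter had before the test changed it, and restore exactly that value on context pop, instead of re-reading whichever defaults file happens to be loaded at the time.

    class Context(object):
        def __init__(self):
            # parameter name -> value it had before this context touched it
            self.original_parameter_values = {}

    def set_parameter_in_context(context, name, value, get_param, set_param):
        if name not in context.original_parameter_values:
            context.original_parameter_values[name] = get_param(name)
        set_param(name, value)

    def context_pop(context, set_param):
        # restore the recorded pre-test values, not the current defaults
        for name, old in context.original_parameter_values.items():
            set_param(name, old)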
# avoid a timeout race condition where ArduPilot re-requests a
# fence point before we receive and respond to the first one.
# Since ArduPilot has a 1s timeout on re-requesting, this only
# requires a round-trip delay of 1/speedup seconds to trigger
# - and that has been seen in practice on Travis
AT-0417.0: Sending item with seq=0
AT-0417.2: Got (MISSION_REQUEST {target_system : 243, target_component : 250, seq : 0, mission_type : 1})
AT-0417.2: Got (MISSION_REQUEST {target_system : 243, target_component : 250, seq : 0, mission_type : 1})
AT-0417.2: Exception caught: Traceback (most recent call last):
File "/home/travis/build/ArduPilot/ardupilot/Tools/autotest/common.py", line 3950, in run_one_test
test_function()
File "/home/travis/build/ArduPilot/ardupilot/Tools/autotest/rover.py", line 4216, in test_poly_fence
self.test_fence_upload_timeouts()
File "/home/travis/build/ArduPilot/ardupilot/Tools/autotest/rover.py", line 4057, in test_fence_upload_timeouts
target_component=target_component)
File "/home/travis/build/ArduPilot/ardupilot/Tools/autotest/rover.py", line 4010, in test_fence_upload_timeouts_2
self.expect_request_for_item(item)
File "/home/travis/build/ArduPilot/ardupilot/Tools/autotest/rover.py", line 3958, in expect_request_for_item
raise NotAchievedException("Expected request for seq=%u" % item.seq)
NotAchievedException: Expected request for seq=1
The "AT" timestamps there are wallclock time. Since speedup for Rover
is 8 by default, that could be as much as 1.6 seconds meaning a
re-request from ArduPilot would be legitimate.
I've added some debug, too - we now emit "Sending item with seq=1"
between those two "Got" lines. That should make the problem clearer -
we've received a re-request rather than a request for the item after the
one we've already sent.
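A sketch of how the receiving side can tolerate that race (illustrative; not necessarily how expect_request_for_item is written in rover.py): a repeated request for a seq we have already answered is treated as a benign re-request instead of failing the test.

    import time

    class NotAchievedException(Exception):
        pass

    def expect_request_for_item(mav, item, timeout=10):
        tstart = time.time()
        while time.time() - tstart < timeout:
            m = mav.recv_match(type='MISSION_REQUEST',
                               blocking=True, timeout=1)
            if m is None:
                continue
            if m.seq == item.seq:
                return m      # the request we were waiting for
            if m.seq < item.seq:
                continue      # re-request of an item we already sent; benign
            raise NotAchievedException(
                "Expected request for seq=%u, got seq=%u" % (item.seq, m.seq))
        raise NotAchievedException(
            "Timed out waiting for request for seq=%u" % item.seq)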
On the assumption that ArduPilot processes mavlink packets
synchronously (or at least in order), after we have run a timesync
roundtrip we can reasonably expect any mavlink command we have sent
prior to the roundtrip to have been processed, and we should be able to
see the results in the mavlink stream.
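A minimal sketch of using a TIMESYNC round trip as that barrier (assuming pymavlink; the helper name is made up): once our distinctive ts1 value comes back, anything we sent before it must already have been processed by the autopilot.

    import time

    def timesync_barrier(mav):
        ts1 = int(time.time() * 1.0e6)       # distinctive value to look for
        mav.mav.timesync_send(0, ts1)        # tc1=0 requests a response
        deadline = time.time() + 5
        while time.time() < deadline:
            m = mav.recv_match(type='TIMESYNC', blocking=True, timeout=1)
            if m is not None and m.ts1 == ts1:
                return                        # our roundtrip has completed
        raise TimeoutError("No TIMESYNC response received")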
This highlights the fact that fetching rally points using the mission
item protocol doesn't work out well when you're talking mavlink1.
# so this looks a bit odd; the other end isn't sending
# mavlink2 so can't fill in the extension here.
It was observed from a log of a failed CI test that the ACK from
clearing the rally items took 6 wallclock seconds to arrive.
We were not waiting for that ACK to arrive before sending the request
for the mission item count, but if it has taken more than six seconds
for the ACK to arrive it is reasonable to assume that MISSION_COUNT
could very well take more than the 10 seconds we allow for it.
If we drain the mav before sending the request for the mission count we
should remove any significant timing problem due to a backlog of mavlink
messages, but the amount of traffic here is problematic.
Also drain in lots of other places where we might be spending way too
long parsing messages.
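For reference, a sketch of the sort of drain this refers to (illustrative; the actual common.py helper may differ): pull every already-buffered message off the connection without blocking, so the next timed wait is not spent chewing through a backlog.

    def drain_mav(mav):
        count = 0
        while True:
            m = mav.recv_match(blocking=False)
            if m is None:
                break                 # nothing left in the backlog
            count += 1
        return count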