Our group has been doing a lot of uncertainty analysis on our measurements lately, and I have a question I haven't been able to find an answer to. We have tests that verify the step attenuation levels in our test fixture. Since we're taking differences between an initial reference measurement and measurements at the various attenuation settings, how does this relative measurement alter the uncertainty contributions that would typically appear in a GUM-style uncertainty budget for power measurements? Would mismatch, instrument uncertainty, linearity, etc. still be included at their typical values, or do some of them partially cancel in the difference?
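To make the question concrete, here is a rough sketch of the kind of difference measurement and the naive combination I'm asking about. All numbers are made up for illustration, not our actual budget values, and the combination shown simply treats every component in both readings as independent, which is exactly the assumption I'm unsure about:

```python
import math

# Hypothetical example of a relative (difference) attenuation measurement.
# All values in dB; uncertainty components are standard uncertainties (k=1)
# and are placeholders, not real budget values.

p_ref = -10.02   # power reading at the 0 dB reference setting
p_att = -30.11   # power reading with a 20 dB step selected

attenuation = p_ref - p_att   # measured step attenuation

# Naive GUM-style combination: root-sum-square the components for one
# reading, then combine both readings as if fully uncorrelated.
components = {
    "mismatch":   0.05,
    "instrument": 0.03,
    "linearity":  0.02,
}

u_single = math.sqrt(sum(u**2 for u in components.values()))
u_diff = math.sqrt(2) * u_single   # both readings contribute if uncorrelated

print(f"attenuation = {attenuation:.2f} dB")
print(f"u (one reading) = {u_single:.3f} dB")
print(f"u (difference)  = {u_diff:.3f} dB")
```

My question is essentially whether this uncorrelated treatment is right, or whether some components (e.g. a systematic instrument offset common to both readings) should drop out of the difference.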