I'm trying to determine the overall error of a measurement method by comparing it to the true value.

I have a set of data that I'm trying to analyze. I computed the absolute difference between each true and measured value, then took the mean and standard deviation of those differences.

I was trying to show a normal distribution graph, with 3 sigma covering 99.7% of the data, so that I could say the error of the method I'm testing is between x and y. For example, my mean and standard deviation are 0.6 and 0.5, so my 99.7% interval would be 0.6 +/- 1.5, i.e. -0.9 to 2.1.
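For context, here is a minimal Python sketch of the computation I described. The measurement values here are made up just to show the shape of the calculation; my real data is different:

```python
import numpy as np

# Placeholder values standing in for my real measurements
true = np.array([10.0, 12.5, 9.8, 11.2, 10.7])
measured = np.array([10.4, 12.1, 10.9, 11.5, 10.2])

# Absolute error for each measurement
abs_err = np.abs(true - measured)

mean = abs_err.mean()
sd = abs_err.std(ddof=1)  # sample standard deviation

# 3-sigma interval; only meaningful if the errors were normally distributed.
# With my actual summary stats (mean 0.6, SD 0.5) this gives 0.6 +/- 1.5,
# i.e. -0.9 to 2.1.
lower, upper = mean - 3 * sd, mean + 3 * sd
print(f"mean={mean:.2f} sd={sd:.2f} interval=({lower:.2f}, {upper:.2f})")
```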

Unfortunately, my data turned out to be non-normal. (I also tried analyzing the signed differences instead of the absolute ones; those turned out normal and gave me a mean of 0 and a standard deviation of 0.8. But I don't think that's what I want, since I don't care whether my errors are positive or negative; I'm just trying to quantify the overall error.)
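Out of curiosity I also simulated this relationship. Assuming the signed errors really are roughly normal with mean 0 and SD 0.8, their absolute values follow a half-normal distribution, which is right-skewed by construction, so the skew in my absolute differences may just be a consequence of that:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated signed errors matching my reported 0 +/- 0.8
signed = rng.normal(loc=0.0, scale=0.8, size=100_000)

# Absolute values of a zero-mean normal are half-normal: right-skewed
abs_err = np.abs(signed)

# Theoretical half-normal mean is sigma * sqrt(2/pi), about 0.64 for sigma = 0.8
print(f"mean |error| = {abs_err.mean():.2f}")
print(f"sd |error|   = {abs_err.std(ddof=1):.2f}")
```

The simulated mean and SD of the absolute values come out close to the 0.6 and 0.5 I measured, which makes me suspect this is what's happening.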

I've tried transforming the data, but none of the transformations seem to work.
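Roughly, the transformations I tried look like this. The sketch below uses simulated right-skewed data standing in for mine, and checks normality with a Shapiro-Wilk test; note that log and Box-Cox require strictly positive values:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated right-skewed errors standing in for my real data
abs_err = rng.gamma(shape=2.0, scale=0.3, size=200)

# Common transformations for right-skewed, positive data
candidates = {
    "log": np.log(abs_err),
    "sqrt": np.sqrt(abs_err),
    "boxcox": stats.boxcox(abs_err)[0],  # Box-Cox with fitted lambda
}

results = {}
for name, x in candidates.items():
    stat, p = stats.shapiro(x)  # null hypothesis: data is normal
    results[name] = p
    print(f"{name}: Shapiro-Wilk p = {p:.3f}")
```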

My data is right-skewed. How do I summarize the results so that they are easily understood and can be generalized to future use of this method?

My main goal is to be able to say: "when measuring something with this method, your error is +/- some value."

I attached the histogram and Q-Q plot.

I really hope that somebody could help me with this.

Thanks