On Nov 18, 2018, at 4:31 PM, Taylor Fausak <taylor@fausak.me> wrote:

Oops, the ordering of the answer choices is manual because some questions have a natural order while others should just be most to least popular. I've made another run through to make sure everything is sorted properly. I'll probably hit publish in the next half hour or so unless there are any objections.

On Sun, Nov 18, 2018, at 3:07 PM, Gershom B wrote:

The language extensions section doesn't appear to be sorted properly. Outside of that, I think that these results are looking much better and any effort to find additional outliers is probably not worth it for the moment. Thanks for your work on this, and I appreciate you being responsive and attentive when problems with the data were pointed out. There's certainly some interesting and helpful information to be gleaned from this data.

Cheers,
Gershom

On November 18, 2018 at 2:55:10 PM, Taylor Fausak (taylor@fausak.me) wrote:
Ok, I updated the function that checks for bad responses, re-ran the script, and updated the announcement along with all the assets (charts, tables, and CSV). Hopefully it's the last time, as I can't justify spending much more time on this.

On Sun, Nov 18, 2018, at 2:32 PM, Michael Snoyman wrote:

Just wanted to add in: good catch Gershom on identifying the problem, and thank you Taylor for working to remove them from the report.

On 18 Nov 2018, at 21:17, Taylor Fausak <taylor@fausak.me> wrote:

Great catch, Gershom! There are indeed about 300 responses that tick all the boxes except for disliking the new GHC release schedule. The main thing the attacker seemed to be interested in was over-representing Stack and Stackage. Also, bizarrely, Java.

That brings the number of bogus responses up to 3,735, which puts the number of legitimate responses at 1,361. For context, last year's survey asked far fewer questions and had 1,335 responses.

On Sun, Nov 18, 2018, at 1:26 PM, Imants Cekusins wrote:

What if the announcement mentioned a large number of potentially bogus responses, explained the grounds for this conclusion, with a new survey conducted early next year?

The next survey would then need to be done differently from this one somehow. To improve reliability, some authentication may be necessary.

Maybe the Stack and Cabal questions could be grouped into separate, distinct surveys conducted by their maintainers through their own channels?

Not sure how much value there is in exact numbers of Stack or Cabal users. Both groups are large enough, and the maintainers of both are aware of the usage stats. Is either library likely to be influenced by this survey?
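[Editor's note: the survey script itself is not included in this thread. For illustration only, here is a minimal Haskell sketch of the kind of heuristic described above: flag responses that answer every question, never express dislike of the new GHC release schedule, and over-represent Stack (and, bizarrely, Java). The Response type, its field names, and the exact criteria below are all assumptions, not the actual code Taylor used.]

module Main where

-- Hypothetical shape of a single survey response.
data Response = Response
  { answeredEverything  :: Bool        -- ticked all the boxes
  , dislikesGhcSchedule :: Maybe Bool  -- Nothing = question skipped
  , buildTools          :: [String]    -- e.g. ["stack", "cabal"]
  , otherLanguages      :: [String]    -- e.g. ["java", "python"]
  } deriving Show

-- A response is suspicious if it matches the pattern described in the
-- thread: complete, never negative about the GHC release schedule, and
-- promoting Stack while also listing Java.
looksBogus :: Response -> Bool
looksBogus r =
  answeredEverything r
    && dislikesGhcSchedule r /= Just True
    && "stack" `elem` buildTools r
    && "java"  `elem` otherLanguages r

-- Split a batch of responses into (bogus, legitimate).
partitionResponses :: [Response] -> ([Response], [Response])
partitionResponses = foldr step ([], [])
  where
    step r (bad, good)
      | looksBogus r = (r : bad, good)
      | otherwise    = (bad, r : good)

main :: IO ()
main = do
  let sample =
        [ Response True (Just False) ["stack"] ["java"]   -- matches the pattern
        , Response True (Just True)  ["cabal"] ["python"] -- genuinely dislikes the schedule
        , Response False Nothing     ["stack"] []         -- incomplete response
        ]
      (bogus, legit) = partitionResponses sample
  putStrLn $ "bogus: "      ++ show (length bogus)
  putStrLn $ "legitimate: " ++ show (length legit)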
_______________________________________________
Haskell-community mailing list
Haskell-community@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-community