Hello, my name is Peter Diggle, and I'm chairing today's RSS webinar. I believe I've got Brad Efron on the line. "Hello, Brad." "Hi, Peter." We also have Andrew Gelman lined up as our invited discussant, but I would like to encourage anybody in the audience who wants to join in the Q&A session at the end: you can either write your question in, in which case I'll read it out for you at the end of the presentation (you need to use the Q&A icon at the top of the screen), or, if you do want to speak directly, you can press star on your phone to unmute yourself. Please do remember to mute yourself again when you've finished. I should say we're grateful to Quintiles for their generous sponsorship of this series of journal webinars, and I'm particularly grateful to HQ staff Jack and Judith for all the work they've been doing behind the scenes.

It's a great pleasure to introduce Brad Efron as today's speaker. Brad is extremely well known all around the world as one of our most distinguished statisticians, but in particular I think everyone would recognize that Brad was doing data science a long time before the term was invented, and he's always been one of the real deep thinkers about the interface between statistics and computer science and its implications for the future of our discipline. I'm sure he won't mind me mentioning that he has a book in preparation with Trevor Hastie, with the captivating title Computer Age Statistical Inference, and I for one am looking forward to that very much. But without further ado, Brad: if you're ready to go, you'll be in charge of the presentation, and then we'll move to Q&A when you finish.

Okay, thank you, Peter. My thanks to the RSS, and to you and to Judith; this is sort of a 21st-century discussion paper, I guess. A notable trend of the past 25 years is the increased use of Bayesian methods in statistical applications. This paper concerns the frequentist evaluation of Bayesian estimates, and it begins in the usual way with a family of probability densities, script F = {f_μ(x)}, where both μ and x would usually be high-dimensional. I'm going to assume we have a real-valued parameter θ = t(μ) that's of special interest, that we want to make some inference about, and we've got a prior π(μ) and we want an estimate of θ. The usual Bayes estimate, θ̂ in the box, the expected value of θ given x, is a ratio of two integrals (reconstructed below for readers without the slides), and that's the estimate we're interested in.

The question arises: how accurate is θ̂? The usual answer would be taken from the Bayesian posterior distribution, say the standard deviation of θ given x. That's obviously the right answer if the prior π(μ) is based on genuine past experience, but it's not so obvious otherwise. Most Bayesian applications these days use uninformative priors, such as the Jeffreys prior, π(μ) proportional to the determinant of the Fisher information matrix to the one-half power, and there's a danger of circular reasoning here, where a self-selected prior is used to assess its own estimate's accuracy. In this paper we're going to just think of the Bayes estimate θ̂ as a function of the data x and evaluate its frequentist accuracy.
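As a reading aid, here is the estimate just described, reconstructed in standard notation from the spoken description (the slides themselves are not part of this transcript); the Jeffreys prior is written up to its normalizing constant:

```latex
% Bayes estimate of theta = t(mu): the posterior expectation,
% a ratio of two integrals over the prior pi(mu)
\hat{\theta} = E\{\,t(\mu)\mid x\,\}
             = \frac{\int t(\mu)\, f_{\mu}(x)\, \pi(\mu)\, d\mu}
                    {\int f_{\mu}(x)\, \pi(\mu)\, d\mu}

% Jeffreys' uninformative prior: the determinant of the Fisher
% information matrix, to the one-half power
\pi(\mu) \propto \left|\,\mathcal{I}(\mu)\,\right|^{1/2}
```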
On to the next slide, slide 3, which is the main result of the paper: the general accuracy formula. "General" here means that it holds for any prior, not necessarily an uninformative one. I'm going to assume that x and μ both lie in p-dimensional space, and that x is unbiased for μ with covariance matrix V(μ). The crucial quantity is in red, or pink, there on the third line: α_x(μ), the gradient with respect to x of log f_μ(x). Not with respect to μ, as it would be for the usual score function; you might call this the Bayesian score function, where you take the gradient with respect to x.

The key lemma says how the posterior expectation changes as a function of x: the gradient of θ̂ with respect to x is the posterior covariance, given x, between t(μ), this parameter of special interest, and α_x(μ). I'm going to call that cov̂ for short. The theorem there at the bottom is then that the delta-method standard deviation of θ̂ (that's a frequentist standard deviation) is, in the box at the bottom, sd(θ̂) = [cov̂' V cov̂]^{1/2}.

Now we're on the next page, page 4. The nice thing about the theorem is that it's easy to implement in practice. Suppose we start out with an MCMC sample of size B (B is going to be 10,000 or something like that) of the full parameter vectors: μ_1*, μ_2*, ..., μ_B*. Then for each i, for each one of those, we can take t(μ_i*), t_i* for short notation, and α_i* = α_x(μ_i*) for the i-th MCMC replication. The average of the t_i* is the usual MCMC estimate of θ̂, the posterior expectation: θ̂ = Σ_i t_i* / B. And now we can estimate cov̂ from the same draws, as the empirical covariance between the t_i* and the α_i*; a sketch of the full computation follows.
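The transcript breaks off just as the estimate of cov̂ is introduced. Below is a minimal sketch of how the page-4 recipe might look in code; it is an illustration, not Efron's software. The function and argument names (t_of_mu, grad_x_log_f, V) are hypothetical stand-ins, and the sketch assumes you can evaluate α_x(μ) = ∇_x log f_μ(x) at each MCMC draw and that V, the covariance matrix of x, is known or separately estimated.

```python
import numpy as np

def frequentist_sd_of_bayes_estimate(mu_draws, x, t_of_mu, grad_x_log_f, V):
    """Delta-method (frequentist) standard deviation of the Bayes
    estimate theta_hat = E{t(mu) | x}, from an MCMC sample.

    mu_draws     : (B, p) array of posterior draws mu_1*, ..., mu_B*
    x            : observed data vector of length p
    t_of_mu      : function mu -> real-valued parameter t(mu)
    grad_x_log_f : function (mu, x) -> gradient of log f_mu(x) w.r.t. x
    V            : (p, p) covariance matrix of x
    """
    B = mu_draws.shape[0]

    # t_i* and the 'Bayesian score' alpha_i* = grad_x log f_{mu_i*}(x)
    t_star = np.array([t_of_mu(mu) for mu in mu_draws])              # (B,)
    alpha_star = np.array([grad_x_log_f(mu, x) for mu in mu_draws])  # (B, p)

    # Usual MCMC estimate of the posterior expectation theta_hat
    theta_hat = t_star.mean()

    # Empirical posterior covariance between t(mu) and alpha_x(mu):
    # cov_hat = sum_i (alpha_i* - alpha_bar) (t_i* - t_bar) / B
    cov_hat = (alpha_star - alpha_star.mean(axis=0)).T @ (t_star - theta_hat) / B

    # Theorem: sd(theta_hat) = [cov_hat' V cov_hat]^{1/2}
    sd = float(np.sqrt(cov_hat @ V @ cov_hat))
    return theta_hat, sd
```

Dividing by B rather than B - 1 matches the definition of cov̂ as a posterior covariance; with B around 10,000 the difference is negligible.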