  • (1) Was your sample obtained with or without replacement? (2) Selecting items "with probability $1/\pi_i$" won't work (and is not part of the H-T estimator anyway), because obviously--since $\pi_i \le 1$--these reciprocals exceed unity. Commented Apr 18, 2012 at 12:55
  • @whuber: Thank you. (1) For now, let's assume it's sampling without replacement. (How does the concept of an overall sampling probability apply to sampling with replacement anyway? Wouldn't it be something like an expected sampling count?) (2) Thanks, edited. Commented Apr 18, 2012 at 14:51
  • I suspect what you may mean by "inverse probability weighting during the selection" is to use probability weighting for the bootstrap according to the relative sizes of the $\pi_i$. Commented Apr 18, 2012 at 15:48
  • @whuber: I am not familiar with the terminology, and I only have a very rough idea of what bootstrapping is: repeated random selection from a sample (with replacement), appending the selected items to a "new" dataset. If my sample is weighted, then weighted sampling is applied during the bootstrap. And if the weights are derived from the selection probabilities, it's inverse probability weighting. At least that's my understanding, please do correct me if I'm wrong here. -- Does the term "weighted bootstrap" apply to my case? Commented Apr 18, 2012 at 15:58
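
To make the mechanics discussed above concrete, here is a minimal sketch of a weighted bootstrap of the Horvitz-Thompson estimator $\hat{T} = \sum_{i \in S} y_i/\pi_i$: resample the observed items with replacement, with resampling probabilities proportional to some weight, and recompute the H-T estimate on each resample. The toy data (`y`, `pi`), the function names, and the choice of $1/\pi_i$ as the resampling weight (the asker's "inverse probability weighting" reading) are illustrative assumptions, not something settled in the thread.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy sample: y_i are observed values, pi_i their
# first-order inclusion probabilities (illustrative numbers only).
y = np.array([3.0, 7.0, 2.0, 9.0, 5.0])
pi = np.array([0.8, 0.5, 0.3, 0.9, 0.4])

def horvitz_thompson_total(y, pi):
    """Horvitz-Thompson estimator of the population total: sum of y_i / pi_i."""
    return np.sum(y / pi)

def weighted_bootstrap(y, pi, n_boot=2000):
    """One possible 'weighted bootstrap': resample indices with replacement,
    with resampling probabilities proportional to 1/pi_i (normalized to sum
    to one), and recompute the H-T estimate on each resample.
    This is an assumption about what 'weighted' means here, not a settled choice."""
    w = 1.0 / pi
    p = w / w.sum()                  # resampling probabilities
    n = len(y)
    estimates = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.choice(n, size=n, replace=True, p=p)
        estimates[b] = horvitz_thompson_total(y[idx], pi[idx])
    return estimates

boot = weighted_bootstrap(y, pi)
print("H-T estimate:", horvitz_thompson_total(y, pi))
print("bootstrap SE:", boot.std(ddof=1))
```

Whether the resampling probabilities should be proportional to $\pi_i$ (as the suggestion in the third comment reads) or to $1/\pi_i$ (as in the asker's description) is exactly the point being clarified in the comments; the sketch only illustrates the resampling mechanics, not which weighting is statistically appropriate.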