ALBERT

All Library Books, journals and Electronic Records Telegrafenberg

  • 1
    Publication Date: 2020-10-29
    Description: My central interest is decision making in the presence of epistemic uncertainty. A method appropriate for both specialized inquiries and everyday reasoning is based on credal logic, which employs multivalent degrees of belief rather than traditional probability theory. It accounts for epistemic uncertainty as unallocated belief. It holds that, when facing real uncertainty, if a person believes a and believes b, then the person believes a and b together. This brand of multivalent logic underlies and justifies how legal decision makers and the rest of us find facts in a world infused with epistemic uncertainty. Indeed, this article closes by showing the equivalence of multivalent logic and inference to the best explanation. By demonstrating this similarity in reasoning, I aim to shore up our faith in the logic of traditional legal reasoning.
    Print ISSN: 1470-8396
    Electronic ISSN: 1470-840X
    Topics: Mathematics, Law
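The conjunction principle this abstract describes can be illustrated with a small sketch that contrasts a min-rule conjunction of belief degrees (a common choice in multivalent logics) with a probabilistic product. The function names, the 0.7 "believes" threshold, and the example degrees are all hypothetical illustrations, not the article's formal system.

```python
# Hypothetical sketch of multivalent conjunction; not the article's formal system.

def conjunction_min(bel_a: float, bel_b: float) -> float:
    """Belief in (a and b) under a min rule: a conjunction is believed
    exactly when both conjuncts are."""
    return min(bel_a, bel_b)

def conjunction_product(bel_a: float, bel_b: float) -> float:
    """Belief in (a and b) if degrees were multiplied like independent
    probabilities."""
    return bel_a * bel_b

threshold = 0.7  # assumed cutoff for "believes"
bel_a, bel_b = 0.8, 0.75
assert conjunction_min(bel_a, bel_b) >= threshold      # conjunction still believed
assert conjunction_product(bel_a, bel_b) < threshold   # product rule loses the belief
```

Under the min rule, believing a and believing b guarantees believing their conjunction, which is the property the abstract attributes to this brand of multivalent logic; a probabilistic product does not have it.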
  • 2
    Publication Date: 2020-09-27
    Description: This article analyses cases where independence between judges’ skills and states of nature affects decision efficiency, in terms of the probability of making a correct collective decision, relative to the case where such independence does not exist. It explains when it is advantageous to include in a panel of judges either former defense lawyers, who have expertise in obtaining acquittals, or former prosecutors, who have expertise in obtaining convictions.
    Print ISSN: 1470-8396
    Electronic ISSN: 1470-840X
    Topics: Mathematics, Law
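The baseline against which such dependence effects are measured can be sketched with a Condorcet-style calculation: if judges decide independently and each is correct with the same probability, the chance of a correct majority decision is a binomial tail. This is a generic illustration under full independence, not the article's model; the function name and numbers are assumptions.

```python
from math import comb

def majority_correct(n: int, p: float) -> float:
    """P(a simple majority of n independent judges is correct),
    each judge being correct with probability p (odd n assumed)."""
    k_min = n // 2 + 1
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(k_min, n + 1))

# With p = 0.7, enlarging a fully independent panel helps:
assert abs(majority_correct(3, 0.7) - 0.784) < 1e-9
assert majority_correct(5, 0.7) > majority_correct(3, 0.7)
```

The article's point is precisely that correlations between judges' skills and the state of nature can break this monotonic picture.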
  • 3
    Publication Date: 2020-09-17
    Description: The theory of comparative propensity, championed by the late Mike Redmayne, has been an influential theory underpinning normative models of the probative value of evidence of previous convictions in criminal trials. It purports to generalize an approximate probative value by means of a Bayesian model in which the likelihood of an innocent person having a criminal record is calculated by reference to general population statistics, and the hard evidence underpinning the prior probability is treated as unknown. The theory has been criticized on the ground that it fails to take account of bias against past offenders in the selection of cases for prosecution. This article analyses the model and these criticisms and concludes that both the model and the criticisms are flawed because they fail to address the evidence on which the prior odds are based. We find that, not only are such mathematical models unsound, but they can only be ‘repaired’ by making assumptions about the typical case which run counter to the legal presumption of innocence. Analysing the flaws in these models, however, does provide some insight into issues affecting the value of prior convictions evidence.
    Print ISSN: 1470-8396
    Electronic ISSN: 1470-840X
    Topics: Mathematics, Law
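The Bayesian skeleton of a comparative-propensity model can be sketched as a likelihood ratio for the record evidence combined with prior odds. The probabilities below are invented placeholders, not Redmayne's figures; the article's point is exactly that the inputs to such a calculation are contestable.

```python
# Toy likelihood-ratio calculation for previous-convictions evidence.
# All numbers are hypothetical placeholders, not estimates from the article.

def likelihood_ratio(p_record_if_guilty: float, p_record_if_innocent: float) -> float:
    """How much more probable a criminal record is under guilt than innocence."""
    return p_record_if_guilty / p_record_if_innocent

def posterior_odds(prior_odds: float, lr: float) -> float:
    """Odds form of Bayes' rule: posterior odds = prior odds * LR."""
    return prior_odds * lr

# E.g. if 50% of the guilty vs 10% of the innocent (taken from general
# population statistics) would have a record, the record multiplies odds by 5:
lr = likelihood_ratio(0.5, 0.1)
assert abs(lr - 5.0) < 1e-9
assert abs(posterior_odds(0.2, lr) - 1.0) < 1e-9
```

The criticisms the article analyses target the choice of p_record_if_innocent (general-population statistics) and the evidence behind the prior odds, not the arithmetic itself.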
  • 4
    Publication Date: 2020-09-16
    Description: Often the expression of a likelihood ratio involves model parameters θ. This fact prompted many researchers to argue that a likelihood ratio should be accompanied by a confidence interval, as one would do when estimating θ itself. We first argue against this, based on our view of the likelihood ratio as a function of our knowledge of the model parameters, rather than being a function of the parameters themselves. There is, however, another interval that can be constructed, and which has been introduced in the literature. This is the interval obtained upon sampling from the so-called ‘posterior likelihood ratio distribution’, after removing, say, the most extreme 5% of a sample from this distribution. Although this construction appears in the literature, its interpretation remained unclear, as explicitly acknowledged in the literature. In this article we provide an interpretation: the posterior likelihood ratio distribution tells us which likelihood ratios we can expect if we were to obtain more information. As such, it can play a role in decision making procedures, for instance about the question whether or not it is worthwhile to try to obtain more data. The posterior likelihood ratio distribution has no relevance for the evidential value of the current data with our current knowledge. We illustrate all this with a number of examples.
    Print ISSN: 1470-8396
    Electronic ISSN: 1470-840X
    Topics: Mathematics, Law
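The construction the abstract discusses can be sketched in a toy model: suppose the likelihood ratio for a matching characteristic is 1/θ, where θ is the unknown population frequency, and our knowledge of θ is a Beta posterior. Sampling θ and discarding the most extreme draws gives an interval from the "posterior likelihood ratio distribution". The model, the function name, and the parameters are assumptions for illustration, not the article's examples.

```python
import random

def posterior_lr_interval(a: float, b: float, n_draws: int = 10_000,
                          mass: float = 0.95, seed: int = 0):
    """Sample theta ~ Beta(a, b) (our posterior for a match frequency),
    map each draw to LR = 1/theta, and return the central `mass` interval
    of the resulting posterior likelihood ratio distribution."""
    rng = random.Random(seed)
    lrs = sorted(1.0 / rng.betavariate(a, b) for _ in range(n_draws))
    lo = lrs[int((1 - mass) / 2 * n_draws)]
    hi = lrs[int((1 + mass) / 2 * n_draws) - 1]
    return lo, hi

lo, hi = posterior_lr_interval(2, 18)  # posterior after only a few observations
assert 1.0 < lo < hi                   # wide: more data could move the LR a lot
```

On the abstract's reading, this interval says which likelihood ratios to expect if more information arrived, for instance when deciding whether gathering more data is worthwhile; it says nothing about the evidential value of the current data.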
  • 5
    Publication Date: 2020-07-22
    Description: Judges should not be influenced by legally irrelevant circumstances in their decision making, and judges generally believe that they manage such circumstances well. The purpose of this experimental study was to investigate whether this self-image is correct. Swedish judges (N = 256) read a vignette depicting a case of libel, in which a female student had claimed on her blog that she had been sexually harassed by a named male professor. The professor had sued the student for libel, and the student retracted her claim during the hearing. Half of the judges received a legally irrelevant piece of information: that the professor himself had been convicted of libel a year earlier. The other half did not. As the outcome variable, the judges were asked to state how much compensation the student should pay the professor. Judges who received the information about the professor’s earlier conviction awarded him significantly less compensation than those who did not. The results show that the judges’ decisions were affected by legally irrelevant circumstances. Implications for research and practice are discussed.
    Print ISSN: 1470-8396
    Electronic ISSN: 1470-840X
    Topics: Mathematics, Law
  • 6
    Publication Date: 2020-03-01
    Description: For several decades, legal and scientific scholars have argued that conclusions from forensic examinations should be supported by statistical data and reported within a probabilistic framework. Multiple models have been proposed to quantify and express the probative value of forensic evidence. Unfortunately, the use of statistics to perform inferences in forensic science adds a layer of complexity that most forensic scientists, court officers and lay individuals are not armed to handle. Many applications of statistics to forensic science rely on ad hoc strategies and are not scientifically sound. The opacity of the technical jargon used to describe these probabilistic models and their results, and the complexity of the techniques involved, make it very difficult for the untrained user to separate the wheat from the chaff. This series of articles is intended to help forensic scientists and lawyers recognize limitations and issues in tools proposed to interpret the results of forensic examinations. This article focuses on FRStat, the tool proposed by the Latent Print Branch of the U.S. Defense Forensic Science Center (DFSC). I explore the compatibility of the results output by FRStat with the language used by the DFSC to report the conclusions of fingerprint examinations, as well as the appropriateness of the statistical modelling underpinning the tool and the validation of its performance.
    Print ISSN: 1470-8396
    Electronic ISSN: 1470-840X
    Topics: Mathematics, Law
  • 7
    Publication Date: 2020-03-01
    Description: In the so-called rare type match problem, the discrete characteristics of a crime stain have not been observed in the set of background material. To assess the strength of evidence, two competing statistical hypotheses need to be considered. The formulation of the hypotheses depends on which identification-of-source question is of interest (Ommen, 2017, Approximate statistical solutions to the forensic identification of source problem. PhD thesis, South Dakota State University). Assuming that the evidence has been generated according to the beta-binomial model, two quantifications of the value of evidence can be found in the literature, but no clear indication is given of when to use either of these. When the likelihood ratio is used to quantify the value of evidence, an estimate is needed for the frequency of the discrete characteristics. The central discussion is about whether or not one of the traces needs to be added to the background material when determining this estimate. In this article it is shown, using fully Bayesian methods, that one of the values of evidence from the literature corresponds to the so-called ‘identification of common source’ problem and the other to the ‘identification of specific source’ problem (Ommen, 2017). This means that the question of whether one of the traces needs to be added to the background material reduces to the question of whether a common-source or a specific-source problem is under consideration. The distinction between the two values is especially important for the rare type match problem, since the values of evidence differ most in this situation.
    Print ISSN: 1470-8396
    Electronic ISSN: 1470-840X
    Topics: Mathematics, Law
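The crux of the abstract, whether one trace joins the background data, can be made concrete in a small beta-binomial sketch. The uniform Beta(1, 1) prior, the function names, and the LR ≈ 1/frequency shortcut are illustrative assumptions, not the article's derivation, but they show why the two quantifications diverge most when the matching type was never observed.

```python
def freq_estimate(k: int, n: int, alpha: float = 1.0, beta: float = 1.0,
                  add_trace: bool = False) -> float:
    """Posterior-mean frequency of a characteristic seen k times among n
    background items, under a beta-binomial model with a Beta(alpha, beta)
    prior. add_trace=True also counts one trace as an observation, which is
    the common-source vs specific-source crux discussed in the abstract."""
    if add_trace:
        k, n = k + 1, n + 1
    return (k + alpha) / (n + alpha + beta)

def toy_lr(k: int, n: int, **kw) -> float:
    """Toy value of evidence for a rare type match: LR ~ 1/frequency."""
    return 1.0 / freq_estimate(k, n, **kw)

# Rare type: never seen among 100 background items.
assert abs(toy_lr(0, 100) - 102.0) < 1e-6                  # trace kept out
assert abs(toy_lr(0, 100, add_trace=True) - 51.5) < 1e-6   # trace added
```

Adding the trace roughly halves the toy LR here, mirroring the abstract's remark that the two values of evidence differ most in the rare type situation.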
  • 8
    Publication Date: 2020-03-01
    Description: A proposed rule announced by the Office of Federal Contract Compliance Programs, describing the way statistical tests will be used in compliance reviews, led the Chamber of Commerce to file a formal Comment. The Comment raises several statistical issues, including the proper analysis of stratified data and the effect of large samples on tests of significance. The Chamber correctly pointed out that simple pooling of the data into one large sample can lead to misleading conclusions, so an appropriate analysis, which combines the results of statistical analyses of the individual strata into an overall estimate and statistical test, is described. Both the proposal and the Comment state that practical significance should be considered but do not provide a clear definition of the term, although various definitions are referred to. Two alternative approaches to evaluating practical significance are described. One assesses the financial impact of the disparity on a typical wage earner, while the second considers the number of employees affected by the disparity and estimates the effect of the disparity on their earnings during their expected time of employment.
    Print ISSN: 1470-8396
    Electronic ISSN: 1470-840X
    Topics: Mathematics, Law
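The pooling pitfall the Chamber raised is the classic Simpson-type reversal, sketched below with invented numbers: pooled selection rates suggest group A is favoured, while within every stratum A does the same or worse. The size-weighted combination stands in for a proper stratified estimate (e.g. a Mantel-Haenszel-type statistic); none of this is data from the rule or the Comment.

```python
# Hypothetical applicant data per stratum: (hired_a, applicants_a, hired_b, applicants_b).
strata = {
    "engineering": (18, 20, 9, 10),     # 0.90 vs 0.90 -- no disparity
    "admin":       (10, 50, 25, 100),   # 0.20 vs 0.25 -- disparity against A
}

def rate_diff_pooled(strata) -> float:
    """Selection-rate difference (A minus B) after pooling all strata."""
    ha = sum(v[0] for v in strata.values()); na = sum(v[1] for v in strata.values())
    hb = sum(v[2] for v in strata.values()); nb = sum(v[3] for v in strata.values())
    return ha / na - hb / nb

def rate_diff_stratified(strata) -> float:
    """Size-weighted average of within-stratum rate differences,
    a simple stand-in for a proper combined stratified estimate."""
    total = sum(v[1] + v[3] for v in strata.values())
    return sum((v[1] + v[3]) / total * (v[0] / v[1] - v[2] / v[3])
               for v in strata.values())

assert rate_diff_pooled(strata) > 0       # pooling suggests A is favoured...
assert rate_diff_stratified(strata) < 0   # ...but within strata A is not
```

The sign flips because the two groups apply to the strata in very different proportions, which is exactly why the Comment's point about pooling matters.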
  • 9
    Publication Date: 2020-03-01
    Description: I critically discuss a recent suggestion in Nance (Belief Functions and Burdens of Proof. Law, Probability and Risk, 18:53–76, 2018) concerning the question of which ratios of beliefs are appropriate when, in criminal or civil cases, one works with belief functions instead of classical probabilities. I do not call into question the use of belief functions themselves in this context, and I agree with Nance that so-called ‘uncommitted support’, possible in the framework of belief functions, should not be taken into account in a decision-theoretic framework. However, I argue against Nance in that, at least in criminal law, relative sizes of beliefs should not be used for decision-making at all. I will argue that only the individual, absolute beliefs should be considered. Since belief functions generalize classical probabilities, this position seems at first sight to conflict with the fact that odds are abundant when we use classical probabilities in a legal context. I take the opportunity, then, to point out that in the classical setting, too, odds are not our primary concern. They are convenient because they appear, together with the likelihood ratio, in the odds form of Bayes’ rule; apart from that, they have no individual significance. I also note that in civil law the conclusions might be different.
    Print ISSN: 1470-8396
    Electronic ISSN: 1470-840X
    Topics: Mathematics, Law
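The point that odds are a computational convenience can be made in a few lines: in the odds form of Bayes' rule the likelihood ratio multiplies the prior odds, but the quantity one ultimately reads off is the absolute posterior belief. This is a generic classical-probability sketch, not the belief-function machinery the article actually analyses.

```python
def posterior_prob(prior_prob: float, lr: float) -> float:
    """Posterior probability via the odds form of Bayes' rule.
    Odds appear only as an intermediate convenience; the absolute
    posterior belief is what is returned."""
    prior_odds = prior_prob / (1.0 - prior_prob)
    posterior_odds = prior_odds * lr
    return posterior_odds / (1.0 + posterior_odds)

# Strong evidence (LR = 1000) against a low prior still leaves only ~50%:
p = posterior_prob(0.001, 1000.0)
assert 0.49 < p < 0.51
```

In the abstract's terms: odds enter the calculation because Bayes' rule is multiplicative in that form, but the decision-relevant quantity is the absolute belief that comes out at the end.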
  • 10
    Publication Date: 2020-03-01
    Description: For several decades, legal and scientific scholars have argued that conclusions from forensic examinations should be supported by statistical data and reported within a probabilistic framework. Multiple models have been proposed to quantify and express the probative value of forensic evidence. Unfortunately, the use of statistics to perform inferences in forensic science adds a layer of complexity that most forensic scientists, court officers and lay individuals are not armed to handle. Many applications of statistics to forensic science rely on ad hoc strategies and are not scientifically sound. The opacity of the technical jargon used to describe probabilistic models and their results, and the complexity of the techniques involved, make it very difficult for the untrained user to separate the wheat from the chaff. This series of articles is intended to help forensic scientists and lawyers recognize limitations and issues in tools proposed to interpret the results of forensic examinations. This article focuses on tools that have been proposed to leverage similarity scores to assess the probative value of forensic findings; we call this family of tools ‘score-based likelihood ratios’. We present the fundamental concepts on which these tools are built, describe some specific members of this family, and compare them to the Bayes factor through an intuitive geometrical approach and through simulations. Finally, we discuss their validation and their potential usefulness as decision-making tools in forensic science.
    Print ISSN: 1470-8396
    Electronic ISSN: 1470-840X
    Topics: Mathematics, Law
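The construction named in the abstract, a score-based likelihood ratio, can be sketched generically: fit a density to comparison scores from known same-source pairs, another to scores from known different-source pairs, and evaluate their ratio at the observed score. The normal fits, the score lists, and the function names are illustrative assumptions; actual tools use more careful density models and, as the article discusses, require validation.

```python
import math
import statistics

def gaussian_pdf(x: float, mu: float, sigma: float) -> float:
    """Normal density, used here as a deliberately crude density model."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def score_based_lr(score: float, same_source_scores, diff_source_scores) -> float:
    """Toy score-based likelihood ratio: density of the observed score under
    a normal fit to same-source scores, divided by its density under a
    normal fit to different-source scores."""
    mu_s, sd_s = statistics.mean(same_source_scores), statistics.stdev(same_source_scores)
    mu_d, sd_d = statistics.mean(diff_source_scores), statistics.stdev(diff_source_scores)
    return gaussian_pdf(score, mu_s, sd_s) / gaussian_pdf(score, mu_d, sd_d)

# Hypothetical similarity scores from ground-truth comparison pairs:
same_source_scores = [0.90, 0.85, 0.95, 0.80, 0.88]
diff_source_scores = [0.20, 0.30, 0.25, 0.15, 0.35]

assert score_based_lr(0.90, same_source_scores, diff_source_scores) > 1.0
assert score_based_lr(0.25, same_source_scores, diff_source_scores) < 1.0
```

Because the score collapses the evidence to one dimension, such a ratio need not coincide with the Bayes factor computed on the full data, which is the comparison the article develops.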