In this population-based contemporary study of breast cancer, including more than 2000 prospectively assessed breast tumours, we show that there was poor agreement between different surrogate definitions of luminal tumour subtypes and molecular subtyping (MS) with the PAM50 (Parker) algorithm [4]. Considerably more tumours were Luminal A as determined by MS than by the surrogate classifications. However, one should not expect to find perfect surrogates for the molecular subtypes, since the surrogate algorithms are based on IHC assessment of protein levels, whereas MS is based on the measurement of mRNA transcript levels of the corresponding genes (ESR1, PGR and MKI67) as well as of additional genes within the PAM50 panel, reflecting the underlying biological signalling pathways. Simply designating histological grade (HG) 1-2 tumours as Luminal A-like and HG3 tumours as Luminal B-like gave better agreement with MS than any of the surrogate classifications presented in this paper. The discriminatory importance of HG is not unexpected, having been reported both in a previous study from our research group and by Maisonneuve et al. [12, 13].
The reproducibility of HG has, however, been shown to be only moderate [23]. Regarding Ki67, the issues of inter-laboratory variability and cut-off levels are also well known [24, 25]. In an exploratory analysis, we used the same percentiles for low, intermediate and high Ki67 as in the study by Maisonneuve et al. [12], whereby the agreement with MS improved (from 66% to 73%). This emphasises the importance of critical reflection on Ki67 cut-offs in the local laboratory. Moreover, only 47% of the tumours in the high-Ki67 category were Luminal B according to MS, which raises the question of whether a cut-off of 20% is too low to identify Luminal B-like tumours, and thereby to identify patients with hormone receptor-positive tumours who might benefit from additional adjuvant chemotherapy. We found no added value of PR in the surrogate classifications. However, since no follow-up data were available for this cohort, we were not able to evaluate PR as a prognostic marker.
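To make the kind of rule discussed above concrete, here is a minimal sketch of a surrogate classifier for hormone receptor-positive, HER2-negative tumours that combines HG with a Ki67 cut-off, together with cohort-specific percentile cut-offs. The 20% threshold echoes the cut-off discussed above, but the function names, the percentile choices and the exact rule are illustrative assumptions, not the algorithms used in this study or by Maisonneuve et al.

```python
import numpy as np

def surrogate_luminal_subtype(hg, ki67, ki67_cutoff=20.0):
    """Toy surrogate call for an ER-positive/HER2-negative tumour.

    hg          -- histological grade (1, 2 or 3)
    ki67        -- Ki67 labelling index in percent
    ki67_cutoff -- Ki67 level separating A-like from B-like (assumed 20%)
    """
    if hg == 3 or ki67 >= ki67_cutoff:
        return "Luminal B-like"
    return "Luminal A-like"

def cohort_ki67_cutoffs(ki67_values, low_pct=25, high_pct=75):
    """Derive laboratory-specific low/high Ki67 cut-offs from the cohort's
    own distribution; the 25th/75th percentiles are placeholders only."""
    low, high = np.percentile(np.asarray(ki67_values, dtype=float),
                              [low_pct, high_pct])
    return low, high

# Example call with fictitious values:
print(surrogate_luminal_subtype(hg=2, ki67=12.0))  # -> "Luminal A-like"
```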
When considering the lack of agreement in the results presented here, one should also be aware of the concerns regarding agreement between different multigene tests. Bartlett et al. compared different multiparameter tests regarding risk classification and intrinsic subtyping and found a rate of discordant results among Prosigna, Blueprint and MammaTyper as high as 41% for tumour subtyping [26]. Similar discordant results from the application and comparison of different commercial gene signatures on RNA sequencing data were found in our own study of the SCAN-B cohort (Vallon-Christersson et al. submitted).
The present study indicates poor agreement between surrogate classifications and MS of luminal breast cancer tumours. By combining HG and Ki67, a large subgroup of patients could be identified as Luminal A by MS. This group of patients may not benefit from the use of molecular assays, especially if other clinicopathological factors indicate a low risk of recurrence. We are aware of the poor reproducibility of Ki67 and HG assessments, an issue that favours MS. Nonetheless, the results of this study offer new insights into how to use MS in combination with histopathological markers in a clinical context, but further studies including adequate follow-up data are needed to correlate these findings with patient outcome.
One condition for the handling of all agreements is that the person responsible for an agreement is informed about the agreement situation and assesses whether the agreement is acceptable from an organisational perspective. As support for this review, you can follow our guide for agreement reviews. There are also several template agreements to use in different situations. See more below.
A well-composed agreement should reflect the situation in which it is to be used; therefore, all agreements will differ. However, in order to facilitate the process, we have developed template agreements for the most frequent agreement types at the University; click on the links below. If you have any questions on how to use the template agreements, please contact the Legal Division.
The person authorised to sign an agreement, in accordance with the regulations above, must always review the content of the agreement and decide whether the University should enter into it or not. This applies regardless of whether the agreement has been reviewed by a legal counsel or not. Agreements always contain material conditions which cannot be assessed by the Legal Division. The project manager is to make sure that there is sufficient supporting documentation, such as financial documentation, documented support for the project at department/faculty level and any comments from the legal counsel responsible for the matter.
When working with an agreement, for example in the context of a collaboration with a company in which an employee has business interests, you may need to discuss issues of conflict of interest and secondary employment. You will also have to adhere to the rules that apply when companies with ties to University staff are engaged. See the guidelines below.
Mammograms and epidemiological data were collected from 987 women, aged 55 to 71 years, attending the Norwegian Breast Cancer Screening Program. Two readers each classified the mammograms according to a quantitative method (Cumulus or Madena software) and one reader according to two qualitative methods (Wolfe and Tabár patterns). Mammograms classified in the reader-specific upper quartile of percentage density, Wolfe's P2 and DY patterns, or Tabár's IV and V patterns, were categorized as high-risk density patterns and the remaining mammograms as low-risk density patterns. We calculated intra-reader and inter-reader agreement and estimated prevalence odds ratios of having high-risk mammographic density patterns according to selected risk factors for breast cancer.
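As a rough sketch of the dichotomisation step described above, the snippet below flags mammograms above a reader's own 75th percentile of percentage density as high-risk and the rest as low-risk; the variable names and values are illustrative and not taken from the study's analysis.

```python
import numpy as np

def dichotomise_density(percent_density):
    """Label readings above the reader-specific upper quartile (75th
    percentile) of percentage density as high-risk, the rest as low-risk."""
    values = np.asarray(percent_density, dtype=float)
    upper_quartile = np.percentile(values, 75)
    return np.where(values > upper_quartile, "high-risk", "low-risk")

# Fictitious percentage-density readings from one reader
readings = [5.0, 12.5, 22.0, 8.0, 35.0, 18.0, 41.0, 3.5]
print(dichotomise_density(readings))
```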
The Pearson correlation coefficient was 0.86 for the two quantitative density measurements. There was moderate agreement between the Wolfe and Tabár classifications (Kappa = 0.51; 95% confidence interval 0.46 to 0.56). Age at screening, number of children and body mass index (BMI) showed a statistically significant inverse relationship with high-risk density patterns for all four methods (all P
The Mammography and Breast Cancer Study is a research project using mammographic density patterns as surrogate endpoints for breast cancer among postmenopausal women attending the Norwegian Breast Cancer Screening Program in Tromsø. The purpose of this study was to classify mammograms according to four methods and to examine their agreement and their relationship to selected risk factors for breast cancer.
We calculated the Pearson correlation coefficient and the crude and weighted Kappa statistics to assess the intra-reader and inter-reader reliability of the mammographic readings. The Kappa coefficient does not require any assumption about 'correct' categorization and includes a correction for the amount of agreement that would be expected by chance alone. A Kappa of 0% indicates that the agreement between two measurements is no greater than would be expected by chance. Kappa values of 50% or more indicate moderate agreement, 60 to 80% good agreement, values over 80% very good agreement, and those over 90% excellent agreement [39].
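For reference (the definitions are standard and not spelled out in the text), the crude Kappa compares the observed proportion of agreement p_o with the proportion expected by chance p_e, while the weighted Kappa penalises disagreements according to weights w_ij applied to the observed and chance-expected cell proportions p_ij and e_ij:

\[
\kappa = \frac{p_o - p_e}{1 - p_e},
\qquad
\kappa_w = 1 - \frac{\sum_{i,j} w_{ij}\, p_{ij}}{\sum_{i,j} w_{ij}\, e_{ij}},
\]

with the disagreement weights w_ij equal to zero on the diagonal, so that larger discrepancies between ordered density categories count more heavily.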
The Pearson correlation coefficient was 0.93 and 0.86 for the repeated quantitative readings conducted 3 months (GM) and 18 months (GU) apart, respectively. The intra-reader agreement for the upper reader-specific quartile versus the three lower quartiles was moderate for reader GM (Kappa = 0.59; 95% CI 0.29 to 0.90) and reader GU (Kappa = 0.59; 95% CI 0.29 to 0.90). For reader NB the intra-reader agreement was good for the Wolfe classification (Kappa = 0.61; 95% CI 0.34 to 0.89) and very good for the Tabár classification (Kappa = 0.89; 95% CI 0.69 to 1.00). The Pearson correlation coefficient was 0.86 for the two original percentage density readings and 0.93 for the subset of 189 mammograms. Both the agreement between the reader-specific upper quartiles (crude Kappa = 0.69; 95% CI 0.64 to 0.74) and that between all four quartiles (weighted Kappa = 0.71; 95% CI 0.68 to 0.74) were good. The agreement between high-risk and low-risk Wolfe and Tabár patterns was moderate (crude Kappa = 0.51; 95% CI 0.46 to 0.56).
The Kappa values for inter-reader agreement in our study are of the same magnitude as or better than those found in several other studies examining different kinds of mammographic reading. Venta and colleagues found a Kappa value of 0.46 for density measurements on X-ray and digital mammograms recorded by two radiologists [41]. In a previous study including more than 3,500 premenopausal and postmenopausal women, we found the overall agreement between high-risk and low-risk for the Wolfe and Tabár classifications to be poor (Kappa = 0.22) [3] in comparison with the moderate agreement (Kappa = 0.51) in the present study. The two classifications are not strictly independent because one reader performed both assessments. We attribute the latter, higher Kappa value to the fact that all women were postmenopausal, resulting in more low-density patterns, which are easier to assess.
The present results are in agreement with studies finding no association between age at menarche [44] or age at menopause [47] and unfavorable density patterns, but not with others [8, 45]. We found a positive association between age at menarche and unfavorable density patterns among premenopausal women and an inverse association among postmenopausal women [8]. In two other studies, a positive overall association was revealed [45, 47]. In the study by El-Bastawissi and colleagues, age at menopause was positively related to unfavorable patterns [45]. In our previous study the same relationship was found to be of borderline significance [8].