Thursday, December 20, 2012

#AER12 Round-up

by Anne Meade, Senior Manager for Website and Social Media

While the PRIM&R Blog Squad provided helpful insight into many of the sessions and panels presented at the 2012 Advancing Ethical Research Conference, a lot of information was also shared via Twitter! Here’s how the conference was seen through the eyes of our tweeting attendees:

@djeneen: In #SanDiego for the #AER12 conference. Doesn't start until tomorrow but already talking shop in my hotel lobby with someone from Chicago!

@antinats: What about engagement for Internet-based clinical trials in a box? #AER12 #Bioethics #regs

@djcontraption: Closing the 1st day @ #AER12-So many sharp, committed, & caring voices here, trying to figure out ways to marry best thought & best practice

@djeneen: Great First Time Attendees breakfast this morning at #AER12. Met folks from New Zealand & Zimbabwe!

@averyavra: Ioannidis: I think it is probably unethical to run a new study if we don't know what we already know. #AER12 @primrconnect

@primrconnect: Ioannidis: We should make sure we register ALL trials and publish ALL results. #AER12

@antinats: As learned yesterday, crowd sourcing & data-sharing can be great, but consider potential privacy/reidentification/linking. #AER12

@djcontraption: So far, serendipity the rule of the day @#AER12. From videoconference book clubs to lunchtable conversations, engaging interactions abound.

@redshutters: #AER12 Global research ethics plenary - "India has a strong culture of activism which has raised difficult questions" about research

@primrconnect: James Gavin III: No one single factor is responsible for poorer health outcomes among populations of color. #AER12

@GUCancerNews: @primrconnect #AER12 as a woman married to Hispanic Dr Gavin's message is important to my household - minority disparity in research

@PaulaKimTRAC: Thanks to all for great session on Front Door consents @primrconnect #AER12 conference. #Biospecimens are important for research progress.

@DavidrVH: Great talk by J Gavin; one observation fr this writer, face-to-face seems to work best for trial recruitment, for all populations #AER12

@antinats: "Reality is where our consciousness is located" #AER12 #secondlife

@primrconnect: Botkin: As a general rule, investigators have no motivation to re-identify de-identified data. #AER12 (Great Debate)

@Virgotex: @primrconnect this is definitely best IRB-related conference I've attended. Well done! #AER12

@GUCancerNews: @primrconnect #AER12 things that have hurt me- force and stigma, Elyn Saks. Providing amazing insight to mental illness

@BWBJD: Change starts with one person. Thank you Elyn Saks (author, The Center Cannot Hold). #AER12

@antinats: Many challenges in deciding whether an activity is QA/QI/etc or human subjects research...Glad it's not just me! #AER12

@mattstafford: #AER12 HIPAA: misplaced moral protection. Authorization required for research but NOT for marketing(!!!)

@djcontraption: Heading out from #AER12. A takeaway-stay creative in how to maximize opportunity for autonomy while remaining attentive to vulnerability.

@jcm57: Best quote of #AER12 -- "I'm an emergency room doctor by training but I know enough about statistics to be dangerous"

It was great to see so many of our attendees experiencing the conference through social media. Be sure to follow PRIM&R on Twitter, LinkedIn, and Facebook! And, see you all at #AER13!

Wednesday, December 19, 2012

Applying lessons learned in Guatemala to research today: Exploring the PCSBI Study Guide

by Karen M. Meagher, PhD, senior policy and research analyst at the Presidential Commission for the Study of Bioethical Issues

The Presidential Commission for the Study of Bioethical Issues (PCSBI) has consistently noted the marked need for effective ethics education. With this commitment in mind, the Commission recently released a companion study guide to its 2011 historical investigation report, “Ethically Impossible” STD Research in Guatemala from 1946 to 1948, in which the Commission detailed the egregious treatment of vulnerable populations in Guatemala by researchers during the 1940s. The new companion piece, A Study Guide to “Ethically Impossible” STD Research in Guatemala from 1946 to 1948 serves as a supplement for instruction in existing bioethics courses and seeks to provide resources to instructors teaching responsible conduct of research (RCR) courses. Given this wide audience, the Study Guide includes discussion questions and suggested readings appropriate to undergraduate and graduate-level coursework, and allows instructors to use the report and related materials to illustrate the ethics topics of their choice.

In the now well-documented Guatemala case, the U.S. Public Health Service approved research that exposed individuals to STDs and then failed to treat all of them. Researchers drew from susceptible populations, including prisoners and the mentally ill; intentionally deceived some subjects about the nature of the study and what was being done to them; and there is no record of any of the subjects giving consent.

For the purposes of teaching ethics, the challenge facing instructors is to translate this historical event into lessons pertinent to the experience of modern day researchers. The set of case studies provided by the Study Guide is accessible to the average student and promotes guided ethics discussion based on real world examples and historical documents. The Study Guide takes students through difficult questions that arise when it is necessary to make moral assessments about unethical events in the distant past and applies such lessons to the present. The Study Guide covers various topics that can be incorporated into ethics courses as a whole, or independently as individual modules. These topics include: research with vulnerable populations; issues of race, consent and deception; ethical aspects of informed consent; and ethical aspects of methodological design and publication.

For existing bioethics curricula, the Study Guide enables instructors to expose students to a period in U.S. history with resources that lend the case context, including how researchers planned and carried out the Guatemala experiments. The inclusion of primary sources right in the text of the Study Guide allows students to see an example of an historical investigation and its place within the interdisciplinary practices of bioethicists. Instructors new to teaching ethics will find an array of materials to choose from, starting with a sampling of basic research ethics texts and resources to introduce students to the work of previous presidential bioethics commissions and the ethical foundations of research regulation. Subsequent sections allow instructors to pick and choose topics that fit with the course design they have in mind.

By extrapolating to ethical issues beyond the individual case and encouraging students to view contemporary research in the same light, the Study Guide introduces students not only to the ethical errors of the past, but also to the method of casuistry and a particular form of moral reasoning; students reflect on what makes this case immoral and what makes it similar to, or different from, other cases. Recommended readings span a wide range, allowing instructors to engage students on questions that bridge the highly theoretical (e.g., whether our moral concepts are stable enough to allow for retrospective judgment of the past) and the highly practical (e.g., when an individual can give consent on behalf of another). As a result, the resources included speak to a broad audience within research ethics education, providing a jumping off point for some and a much-needed starting place for others.

Monday, December 17, 2012

Embrace diversity in every way we can: An interview with Eric Allen

Today we’d like to introduce you to Eric Allen, MLS, CIP®, CPIA®, who serves as a member of PRIM&R’s Diversity Advisory Group (DAG). 

Eric Allen has been a PRIM&R member for nine years. He received his bachelor’s degree in exercise and sports science from Greensboro College, and holds a master’s degree in liberal studies from the University of North Carolina at Greensboro. He has worked in the area of research administration for well over a decade at various universities, and spent nine years as the director for the Office of Research Compliance at the University of North Carolina at Greensboro. Currently, Eric is the associate director of consulting services for the HRP Consulting Group. At HRP, Eric assists institutions with obtaining accreditation or re-accreditation, conducting program evaluations, and developing policies and procedures. He provides regulatory and ethical advice/guidance and is an improvement strategist. 

Joanna Cardinal (JC): When and why did you join the field? 
Eric Allen (EA): I originally joined the field of research ethics in 2000. This was a transitional point for me as I was ending my career as a commercial personal trainer. A friend of mine indicated that they needed a person to take on a new area in the research office, involving research compliance. It sounded like a fun and an interesting challenge, so I went for it.

JC: What skills are particularly helpful in a job like yours? 
EA: There are three critical skills for being successful in this field: 1) resilience, 2) patience, and 3) attention to detail. Additionally, if you're good with people, the job will suit you very well.

JC: Tell us about one or more articles, books, or documents that have influenced your professional life, or that you feel are particularly relevant to the field.
EA: There are a few worth mentioning:

  1. The article Institutional Review Board (IRB) Mission Creep: The Common Rule, Social Science, and The Nanny State by Ronald White motivated me to improve the knowledge level and perception of research ethics for investigators. This article stimulated me to find creative ways to change the culture at my institution around the topic of research ethics.
  2. The Menlo Report: Ethical Principles Guiding Information and Communication Technology Research forced me to realize that we are living in an evolving world where thinking out of the box is a necessity. In this field, we must consider how the research and teaching activities we oversee, condone, or approve will affect society, the environment, and the advancement of research in the future.
  3. Finally, IRB Management and Function and its accompanying member handbook were not only instrumental in assisting me with obtaining my CIP certification, but also provided me with a deeper understanding of my work and the work of my colleagues in the research enterprise. 
These materials were useful in training committee members and staff. They were also helpful in clarifying job descriptions and responsibilities for human resources. Overall, these were the most influential documents for me professionally because they challenged me to constantly improve and always set new goals. 

JC: Have there been any PRIM&R events or talks that you have attended that have significantly impacted your approach to your work? If so, what were they and how did they influence you? 
EA: One of the most memorable PRIM&R events that I recall was presented by Melissa Lewis, who did a hilarious skit about IRB meetings. It's amazing how funny the truth really is. Following the skit, she shared several stories that I found very clever and amusing. I was able to use the skit as an educational training tool for my compliance committees. I was also able to use this skit as justification to the institutional official to consider a new process for electing committee members. Since watching that skit I have become a professional people watcher, which has made me more aware of individuals who hide in the group, and I have developed ways to incorporate them into the discussion. Melissa is a great teacher and has helped me with my presentation skills. Her dynamic, yet simple approach has a way of sticking with you and continually influencing your actions, thoughts, and communication. 

JC: How has membership in PRIM&R’s community of research ethics professionals helped you to advance in your career? 
EA: At my first conference, I was overwhelmed by how many people were actually in the field and how helpful they were. In particular, I was struck by the enthusiasm, bright smile, wit, and charm of Ada Sue Selwitz. Oh, and you can’t forget her distinctive southern drawl! Ada Sue was very easy to talk to and extremely helpful. She was able to translate what was going on because I was foreign to the research ethics lingo. Jeff Cohen was a true pleasure to meet as well. I was impressed by his vast knowledge of the regulations and his ability to connect with people one-on-one, or on stage in front of thousands as a plenary speaker. Both Jeff and Ada Sue have been instrumental in helping me define myself in the field of research ethics and develop as a professional in this field. 

JC: Why is the issue of diversity important to you?
EA: Diversity is important to me because the world is diverse and to truly conduct meaningful research the results should help everyone. The Belmont Report extensively addresses sharing both the burdens/risks and benefits, as well as doing the right thing for the right reasons. To make the world a better place we need to embrace diversity in every way we can. Not only is diversity good karma, it goes along with the fundamental theory that all men are created equal. 

JC: Why did you agree to serve on PRIM&R’s DAG? 
EA: I agreed to participate in DAG because I felt it would give me the opportunity to broaden the horizons of others. I am a firm believer in “each one teach one,” and with this opportunity I will be able to get people fired up about research ethics, research, and expanding the PRIM&R community. 

JC: What would you suggest to readers who are looking to strengthen the diversity of their institution, organization, or company?
EA: I would suggest looking at the community, population, or audience you serve and asking yourself one question: Is my institution representative of that group of people? To strengthen diversity, include individuals from both current and future target markets/groups or areas of interest; doing so increases your ability to achieve success. 

JC: What advice have you found most helpful in your career? 
EA: In the words of Benjamin Franklin, “Without continual growth and progress, such words as improvement, achievement, and success have no meaning.” 

JC: What is something you know now that you wish someone had told you when you first entered this field? Or, what is an example(s) of a lesson you had to learn the hard way? 
EA: I wish someone would have told me that this community of professionals, including government agencies, is all about helping others.

Friday, December 14, 2012

Would you post your genome online? A debate about privacy risks in genomic research

by Jackie Tekiela, MS, CIP, Institutional Review Board (IRB) Administrator at Wheaton Franciscan Healthcare

I have to admit, I love a great debate, and the session titled A Great Debate: Be it Resolved That Large-Scale Genomic Research Poses Special Privacy Risks to Research Subjects at the 2012 Advancing Ethical Research Conference was no exception. The session explored the ethical issues and privacy concerns related to genetic information identified through research, such as large-scale genome sequence data. A considerable portion of the debate focused on whether or not genetic data was actually identifiable.

Jeffrey R. Botkin, MD, MPH, took the position that genetic information itself is not identifiable and genetic data does not pose greater risks to privacy than other research data. He suggested that current regulations are adequate to protect the privacy of research participants involved in genetic research.

The question, Dr. Botkin argued, is not whether or not identification is feasible. Current regulation and guidance require that the risk of individual identification be low, but not zero. While DNA sequences could be used to identify individuals, a sequence alone is unlikely to predict health information. The potential risk comes with the ability to link a sequence to reference databases, which are not readily available. Additionally, re-identification of a sequence is not trivial and requires significant expertise and motivation.

In closing, Dr. Botkin noted that, while genomic data doesn’t pose special privacy risks, there is a need to balance the use of genomic data—like any other data—with human subject protections. He also acknowledged that it is necessary to evaluate safeguards as technology progresses. In addition, he stressed the importance of not overemphasizing risks to prospective subjects and of working to de-stigmatize genomics research.

For the opposing argument, Latanya Sweeney, PhD, proposed that there are unique risks inherent in genomic research. Dr. Sweeney argued that genetic material cannot be de-identified, can lead to direct harm to research participants, and that additional steps should be considered to safeguard the privacy of participants in genomic research. Before electronic records were commonplace, it was believed that demographic data could not be used to identify individuals. Dr. Sweeney has shown that 87% of the population is identifiable by zip code, birth date, and gender alone.
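The mechanism behind that 87% figure is uniqueness: if a combination of seemingly innocuous attributes appears only once in a dataset, that record can be linked to an identified source (such as a voter roll) even though no name is present. As a minimal sketch of the idea, with entirely hypothetical toy records and a made-up helper function, one can measure what fraction of rows are unique on their quasi-identifiers:

```python
from collections import Counter

# Hypothetical toy records: (zip code, birth date, gender).
# No names appear, yet any row whose attribute combination is unique
# could in principle be matched against an identified reference dataset.
records = [
    ("53201", "1970-03-14", "F"),
    ("53201", "1970-03-14", "F"),  # shares all three attributes: not unique
    ("53201", "1981-07-02", "M"),
    ("92101", "1964-11-30", "F"),
    ("92101", "1990-01-05", "M"),
]

def fraction_unique(rows):
    """Fraction of rows whose quasi-identifier tuple appears exactly once."""
    counts = Counter(rows)
    return sum(1 for row in rows if counts[row] == 1) / len(rows)

print(fraction_unique(records))  # 3 of the 5 toy rows are unique -> 0.6
```

The more precise the attributes (full birth date rather than birth year, five-digit zip rather than state), the larger the unique fraction grows, which is why coarsening quasi-identifiers is a standard de-identification safeguard.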

The threats of re-identification of genomic data are real. Dr. Sweeney provided the example of a 1996 study in which one third of Fortune 500 companies reported using medical information when making hiring or firing decisions. Life insurance discrimination (which is not prohibited by the Genetic Information Nondiscrimination Act, or GINA) could also occur if genomic data is used inappropriately. Dr. Sweeney also cited possible concerns as genomic research continues to progress, such as courts requiring researchers to use genomic databases as reference databases for identifying individuals. 

Dr. Sweeney also gave a number of examples of how individuals are willing to share data and how “big data” can be used to identify individuals. She noted that people’s expectations about privacy are changing, a shift that will force us to consider what informed consent truly means.

Both debaters presented some thought-provoking points to consider as the field of genomic research moves forward. Many related issues abound in the field of internet research, as discussed by fellow blogger Andrea Johnson in Was that anonymous internet survey really anonymous?

Schizophrenia, stigma, and research harms: More questions than answers

by Rebecca Boxhorn, JD, Research Associate at the Consortium on Law and Values in Health, Environment & the Life Sciences at the University of Minnesota 

In her Pillars of PRIM&R Keynote Address at the 2012 Advancing Ethical Research Conference, Professor Elyn Saks gave a moving and thoughtful description of her struggles with schizophrenia and how her experiences might shed light on research ethics. Professor Saks read excerpts from her nationally bestselling memoir The Center Cannot Hold: My Journey Through Madness, which portrays the symptoms and stigmas of “grave” schizophrenia. Saks described a particularly striking account of a breakdown she suffered during her studies at Yale Law School. As a law school graduate myself, I can attest that the process is difficult enough without psychotic episodes, paranoid delusions, and hallucinations. Saks, however, faced an even greater burden with her illness in addition to the rigors of the legal academy.

Beyond the recounting of her struggles, however, Saks made persuasive arguments for ethical and compassionate treatment and research of individuals with psychiatric disorders. Saks argued that she and others like her “are not schizophrenics, but people with schizophrenia.” The “otherization” of those with psychiatric disorders damages psychiatric patients and psychiatric research. Saks made cogent arguments for how her and others’ struggles can inform research ethics. One suggestion in particular seemed to strike a chord with the PRIM&R audience. Mental illness, unlike other medical conditions, faces an extreme burden of societal stigma, including the perception that those with these disorders are somehow “less than whole people.” To advance ethical research and treatment of individuals with such disorders, we need to eliminate the stigma.

In my work on return of research results and incidental findings, I often encounter concerns about stigma resulting from genetic and genomic research. A recent and illustrative example is the experience of the Havasupai Tribe of Arizona, whose members’ DNA samples were used in research beyond the scope of their consent. After filing a lawsuit in 2004, the Havasupai members received a settlement of $700,000 from Arizona State University for harms resulting from the unauthorized study of traits carrying heavy cultural stigma, including inbreeding and schizophrenia. More generally, stigmatization and discrimination are often identified as tangible potential harms resulting from genetic and other types of research.

The refusal to participate in research based on resulting stigma is, of course, understandable. I wonder, however, how research is hampered by subjects seeking to avoid stigma. Might objecting to research on stigmatized conditions, in fact, further embed these stigmas by slowing the progress of treatment and societal understanding of the conditions in question? What role do researchers play in eliminating stigma? How does that role relate to researchers’ obligations to their subjects? Are there ways to reveal results to individuals while also mitigating stigma for the individual? If so, what role should stigma play in evaluating the potential harm to participants? Professor Saks’ speech left me with more questions than answers, but in the best way possible. Perhaps the research community should endeavor not only to protect subjects from stigma, but also to eliminate it entirely.

Compensation for research injury: Why the US isn’t a world leader

by Julie Fine, BS, Legal Specialist, Legal Division, Worldwide Research and Development, Pfizer Inc. (Please note: The views presented here are my own and do not reflect the positions or policies of Pfizer Inc.) 

“Medical care generally makes one whole again (before the injury occurred), but some economical expenses ethically ought to be considered.” - Elizabeth Pike

Still interested in learning more about compensation for research-related injury, I attended Panel XII: Compensation for Research-Related Injury: Is it Finally Time for a Nationalized System? on the final day of the 2012 Advancing Ethical Research Conference. The session was moderated by Daniel Nelson, MSc, from the University of North Carolina, Chapel Hill, and the panel included attorney Elizabeth (Lizzy) Pike, JD, LLM, former post-doc at the National Institutes of Health; Efthimios Parasidis, MBE, JD, from Saint Louis University; and Karen Moe, PhD, from the University of Washington.

Lizzy provided some history and perspective on U.S. views regarding ethical justifications for compensation, including:
  • Distributive justice (equitable risk-benefit) 
  • Compensatory justice
  • Internalization of costs (weigh true costs) 
  • Trust in research 
  • Reciprocity 
She then outlined two prevailing counterarguments against these justifications:
  1. Subject assumes risks by way of informed consent; and 
  2. The high cost of compensation given limited research funds. 
Lizzy believes there has been so little progress around this issue in the U.S. because of:
  • Lack of political will; 
  • Legal challenges to implementation; 
  • Incorrect assumptions about lawsuits; and 
  • Lack of empirical data. 
Her last point segued perfectly into Efthimios’ presentation on a study being conducted by Saint Louis University School of Law. Two hundred of the top-funded research institutions participated in an examination of their research injury compensation policies. Four policy models were identified from the preliminary data:
  • Model 1: Policy does not require the consent form to state that compensation will be provided 
  • Model 2: Policy requires that consent form states that compensation is at the discretion of sponsor or institution 
  • Model 3: Policy requires that the consent form state that compensation is available as long as certain conditions are met (e.g., if the subject cannot pay, or if the sponsor pays) 
  • Model 4: Policy requires that compensation be paid 
Karen concluded the panel by describing the University of Washington’s Human Subjects Assistance Program. In place since 1982, the program was motivated by a combination of risk management and ethical principles. She explained that the program works because of its creative approach to risk management, the commitment and support of the University of Washington School of Medicine and units across the University, and its simple implementation for researchers and subjects.

During the question and answer portion of the panel, specific models for compensation programs were discussed. Universal healthcare coverage appears to make it easier for countries to support national compensation systems. The U.S. military has a closed healthcare system that works for providing treatment and compensation for research injuries, but it’s unclear whether the Patient Protection and Affordable Care Act will make it easier to require that medical care costs for research injury be paid across the board. The presenters suggested that a workers’ compensation-style system might also be effective.

Ultimately, as the session concluded, the fundamental question remained: Should research subjects be entitled to compensation? What do you think?

I really enjoyed my first Advancing Ethical Research conference and sharing my experience here on Ampersand. Thank you all for reading!

That’s an investigational device? Really?!?

by Andrea Johnson, JD, CIP, Regulatory Specialist in the Research Integrity Office at Oregon Health and Science University 

Personalized medicine is an exciting and growing focus of clinical research. It frequently involves the use of an in vitro diagnostic test (IVD) to identify biological or genetic characteristics about an individual that can predict how a person will respond to different available therapies. This technology may allow providers to determine the best course of treatment for that individual’s disease. IVDs fall within the Food and Drug Administration’s (FDA) definition of medical devices, but the regulatory requirements for their use in the context of a clinical trial are not always immediately apparent. As a result, clinical studies of personalized medicine techniques can create a host of challenging regulatory issues for investigators and institutional review boards (IRBs) alike.

On the final day of the 2012 Advancing Ethical Research Conference, I had the privilege of attending the session titled, In Vitro Diagnostic Devices Used as Integral Parts of Therapeutic Clinical Trials and Companion Devices: IRB Issues and Significant Risk Determinations, presented by Elizabeth Mansfield, PhD, of the FDA and J. Milburn Jessup, MD, of the National Cancer Institute (NCI). This topic has become an important one for me in my current job, and I could write a book discussing my thoughts on this session, but I’ll stick to the highlights here.

First, one of the biggest challenges with IVD studies is simply being able to identify them. Investigators tend not to think of lab tests that determine, for instance, study eligibility as investigational devices. When asked whether the IVDs in the study are approved or cleared by FDA, they might respond with something like, “Well, we use these all the time in clinical practice.” As today’s session made clear, using a test in clinical practice doesn’t mean it is FDA approved. Dr. Mansfield provided the following advice:

  • Search FDA’s online databases to determine whether an IVD is approved or cleared for the proposed use in the study. 
  • Keep in mind that a single IVD consists of all of the components needed to carry out that intended use, including reagents, machines, software, etc. For example, if I’m looking for biomarker XYZ, the IVD includes the entire collection of materials and equipment that I need to test a blood sample for XYZ. 

Second, a key challenge for IRBs in reviewing IVD studies is determining the level of risk (Significant Risk or Nonsignificant Risk) posed by the device. The question is essentially, “What is the risk to the subject if the IVD test result is inaccurate?” In other words, what is the risk that the test may produce inaccurate findings, leading an investigator to give subjects a harmful or inappropriate drug? Dr. Jessup’s recommendations for IRBs included:

  • Requesting information on the IVD that details the chances of false negatives or false positives and explains what measures are in place to mitigate that risk. 
  • Including clinical assay developers as IRB members to help inform these determinations. 

Finally, I was very pleased to hear that the FDA is working on new guidance regarding IVDs, which is due out next year!

Smorgasbord Thursday: Some fun, some food for thought, some learning

by Susan Trinidad, MA, Research Scientist in the Department of Bioethics & Humanities at the Center for Genomics & Healthcare Equality at the University of Washington

Before I get to the meat of this post, I want to dispel any sense you may have that we were dour and serious all last week in San Diego – despite the grievous lack of sunshine. Each morning, we were favored with the musical stylings of the inimitable duo of Michele Russell-Einhorn, senior director of the Office for Human Research Studies at the Dana-Farber Cancer Institute, and Ivor Pritchard, senior advisor to the director at the Office for Human Research Protections (OHRP). On the final morning of the conference, they treated us to a highly original and entertaining ditty set to the tune of the Beach Boys’ hit “Surfin’ USA.” Their version? “Shoppin’ IRBs.” Very catchy!

I also want to say that the Pillars of PRIM&R Lecture delivered by Elyn Saks, author of the astounding memoir, The Center Cannot Hold: My Journey Through Madness, was brave, powerful, and inspiring. Professor Saks is Orrin B. Evans Professor of Law, Psychology, and Psychiatry and the Behavioral Sciences at USC. She is also a person with schizophrenia. Read her book!

In terms of didactics, the highlight for me on Thursday was Tribal Participatory Research: Unique Aspects of Working in American Indian and Alaska Native Contexts. The scheduled presenters were unable to attend because they’re off in DC meeting with the President, but two very experienced experts pinch hit: Bill Freeman, director of tribal community health programs and human protections administrator at Northwest Indian College, and Deborah Morton, institutional review board (IRB) chair of the Southern California Tribal Health Clinic at the Indian Health Council.

Dr. Freeman opened the session by sharing a presentation from the National Congress of American Indians’ Puneet Sahota. (These slides and other materials are available online for conference attendees on the 2012 AER Conference Passport.) Dr. Sahota’s slides provided a helpful introduction to challenges for university IRBs in the review of research with tribal communities and community-based participatory research (CBPR) more generally. The session highlighted the need to protect whole communities in addition to individual study participants; jurisdictional issues that may arise in work with tribal IRBs; questions about when research begins; and how the IRB ought to regard community members who are also co-researchers.

Dr. Morton shared insights from the front lines of research oversight in a clinic that serves American Indian people in Southern California. She described how establishing a Tribal IRB fits into Dickert & Sugarman’s framework outlining four ethical goals of community consultation in research:
  1. Minimize risk beyond the original vision of the researcher (who may or may not have community experience); 
  2. Enhance the direct and indirect benefits of the healthcare delivery structure through the return of data and results, and through empowerment of the community to decide what research will be done, how it will be done, and how much will be done; 
  3. Assure ethical and political legitimacy through the meaningful involvement of community members in decision making; and 
  4. Contribute to a sense of shared responsibility.
I’m running out of space, but I want to share Dr. Freeman’s review of what communities and Tribes want from research:
  • Protect and benefit the community
  • Respect elders and knowledge of the community 
  • Respect community’s strengths and survival 
  • Incorporate traditional spirituality into the project 
  • Promote resilience, assist community in its activation and problem finding, addressing, solving 
  • Have pride in the community’s role in the CBPR project 
  • Have ownership in/of the project 
  • Respect/promote tribal sovereignty and community power 
  • Express hope for the community’s future 
I think it’s really important for IRBs that review Tribal research to be aware of these desires.

Saturday, December 8, 2012

2012 Advancing Ethical Research Conference: Beyond the sessions

by Jackie Tekiela, MS, CIP, Institutional Review Board (IRB) Administrator at Wheaton Franciscan Healthcare
 
I really enjoyed the presentations, sessions, and events that I attended at the 2012 Advancing Ethical Research (AER) Conference. As expected, there was great variety and quality throughout all of the offerings. I hope that you took advantage of them as well! Here are a few of my favorites:

Blog Squad: Too obvious? Writing for Ampersand has been a fun, rewarding part of my conference experience, and the red Blog Squad t-shirt was an added bonus. As I’m sure was the intent, it makes members of the PRIM&R Blog Squad much more visible, and I’ve met so many amazing people who’ve recognized the shirt. Think the Blog Squad is a neat idea? Want to be a member of the team? Be on the lookout for the Blog Squad application for the 2013 AER Conference, and you too can join the fun!

Morning entertainment: As I was walking from the hotel the first morning of the conference, singing “Call Me Maybe” in my head, I wondered what the Welcome Session would hold. For those of you who weren’t there, the Welcome Session on Tuesday, December 4, included a parody of Carly Rae Jepsen’s “Call Me Maybe” by Ivor Pritchard, PhD, and Michele Russell-Einhorn, JD. It was great. I think attendees appreciate all the hard work that the Conference Co-Chairs invest in starting the day with some fun, light-hearted entertainment.

Lunch: It's nice to take a break after the morning sessions and enjoy the beautiful San Diego weather. It’s also a great opportunity to process what you’ve heard, chat with some new friends, or make a few more! Lunch time at AER offers the opportunity for some "special guest" meals and demonstrations, such as PRIM&R's Online Course and Knowledge Center. You can also visit the conference Bookstore, peruse the posters, browse the Job Board, or check out the Tuskegee commemoration exhibit, “The Greater Good: An Artist’s Contemporary View of the Tuskegee Syphilis Experiment.”

Affinity groups: During the conference, PRIM&R organizes Affinity Groups (small groups of people who share areas of specialized professional interest), which are designed to foster networking and community building before, during, and after the conference. Seven interest groups were represented at Affinity Group events: Global Research; Institutional Officials (IOs); IRB Chairs; Quality Assurance/Quality Improvement (QA/QI); Small Institutions; Social, Behavioral, and Educational Research (SBER); and Unaffiliated/Community Members. Being from a small institution, I found the Affinity Groups program a helpful way to network and share ideas with others who have concerns similar to mine.

If you haven’t been able to take advantage of these “extras,” don’t worry—there’s always next year! Stay informed about the programs, events, and special opportunities that will be offered at the 2013 AER Conference by visiting PRIM&R’s website.

Vantage point

by Julie Fine, BS, legal specialist, Legal Division, Worldwide Research and Development, Pfizer Inc. (Please note: The views presented here are my own and do not reflect the positions or policies of Pfizer Inc.)

On Thursday, I attended Challenges in Securing Coverage for Research-Related Injury, a session presented by my colleague, Marc D. Francis, JD, Pfizer Legal, and Karen E. Moe, PhD, University of Washington. Marc began by comparing the challenges in research injury coverage to a plot device used in the movie Vantage Point, in which a single event (an attempted presidential assassination) is portrayed from the perspective of multiple witnesses. Recollections of the event varied greatly depending on each individual’s point of view and proximity to the action. Marc likened the different perspectives to the views and biases of pharmaceutical sponsors, academic institutions, and, most importantly, the subjects who consent to participate in research.

By definition, subject injury is an adverse clinical event experienced by a subject as the direct result of participation in a clinical trial. Subject injury language in clinical trial agreements allocates responsibility for coverage of injuries incurred by subjects outside of common law tort actions. It does not cover pre-existing conditions, natural disease progression, injuries from regularly scheduled care, or negligence of the physician or clinical staff.

Currently, U.S. law does not require sponsors to cover medical expenses for subject injuries or to carry liability or other insurance. Under the Common Rule (45 CFR § 46.116(a)(6)) and Food and Drug Administration (FDA) human subjects protection requirements (21 CFR § 50.25(a)(6)), informed consent must include details on whether compensation or medical treatment is available if injury occurs, but compensation and/or medical treatment is not itself required.

Alternatively, guidance from some internationally respected sources addresses research injury practices somewhat more specifically. For example, ICH E6 Good Clinical Practice states that the “investigator/clinical site should ensure that adequate medical care is provided to a subject for any adverse events during and after participation in a clinical trial” (Section 4.3.2), and that sponsor policies should address the costs of treatment of subjects in the event of subject injuries in accordance with applicable regulatory requirements (Section 5.8.2). The Association for the Accreditation of Human Research Protection Programs (AAHRPP) also requires a written agreement with the sponsor that addresses medical care for research participants with a research-related injury, when appropriate (Domain I (Organization), Element I.8.A). Additionally, American Medical Association Ethical Opinion E08.0315 includes a provision that “physician-researchers must ensure that protocols include provisions for funding subject medical care in the event of complications.”

From an international perspective, the United Kingdom, South Africa, Australia, New Zealand, and Singapore have all adopted compensation systems modeled on the Association of the British Pharmaceutical Industry (ABPI) guidelines. Brazil has adopted a holistic approach where compensable injuries include the “possibility of injury to the physical, psychic, moral, intellectual, social, cultural, or spiritual dimensions of the human subject.” In response to recent controversy, the Indian Council of Medical Research has drafted very broad guidelines which include regulations involving failure of an investigational product to provide intended therapeutic effect, administration of placebo providing no therapeutic benefits, and adverse effects due to concomitant medications.

From a best practices perspective, clinical trial agreements and informed consent documents ought to address subject injury provisions, as well as what will happen if a subject is injured as a result of the research. Two exemplary templates are:
  1. The proposed Institute of Medicine template
  2. The National Cancer Institute/CEO Roundtable Harmonized Clauses template
Marc also recommended several additional resources.
Karen Moe described the University of Washington’s compensation program—the Human Subjects Assistance Program (HSAP)—which has been in place for 30 years. HSAP is a no-fault program “developed to provide medical and other assistance to subjects who experience a research-related medical problem that is likely caused by University-conducted research.” The program does not apply to industry-funded-and-initiated research or to non-healthy volunteers in Phase 0-1 studies, and is limited to $10,000 for out-of-pocket expenses and a $250,000 write-off of care provided in a University of Washington hospital. None of this compensation comes from indirect fees. The HSAP program is not without challenges, among them:
  1. Medicare Secondary payer rule (won’t pay if there is a promise another source will) 
  2. Complex billing scenarios (who pays and how it is administered) 
  3. Consistency (between contract terms, budget, and consent document) 
  4. MMSEA (avoidance of dual payment) 
  5. Research burden (administering items two and three)
Karen referenced the excellent presentation by Kenneth Feinberg at a meeting of the Presidential Commission on the Study of Bioethical Issues about research injury compensation.

I hope I have time to read some of these materials! And I’ll have to rent Vantage Point on Netflix when I get home.

Me-time: The best tool for professional development

 by Royell Sullivan, Institutional Review Board (IRB) Education Specialist at the New York University (NYU) School of Medicine

On Wednesday, I attended session D5, How to Grow Your IRB Career: Professional Development and Networking, presented by Charlotte Coley, MACT, CIP; Karen Hansen; and Yvonne Higgins, CIP. I was really excited for this presentation because as a newbie in the world of human research protections, I am looking to discover what I will need to do to advance within this field. Deciding that you would like to pursue a career in human research protections entails knowing your environment, knowing what it will offer you, and identifying potential areas of growth. The session offered helpful tips for the beginner to keep in mind. I work as an IRB education specialist, and while I am interested in growing my own career, I also want to know how I can help my colleagues do the same. The speakers encouraged us to read constantly, take advantage of internal training, and check job listings frequently. I agree with these concepts but my staff often argues that there just is not enough time. Today, I asked for a solution to this. The response I was given was simple: MAKE TIME!

As an administrator, your day-to-day tasks and turn-around time are important. But not taking the opportunity to enhance your knowledge of all things IRB-related will hinder your professional growth. Make professional advancement a priority by taking full advantage of the resources you have. Take every opportunity to network both within and outside of your institution. Master the easy tasks of your job and take the time to improve in areas that you find difficult instead of avoiding them. Doing so will help to transform your job into a career.

To acquire responsibilities that you have never had, you will have to do things that you have never done. You may have to do what makes you nervous or uncomfortable in order to yield positive results. You must clarify your goals and communicate them to your institution’s leadership. Explain to them how helping you grow can benefit the institution at large. You may find that they nurture your goals and motivate you to keep pushing forward. If you have decided that you want a career instead of just a job and you do not see opportunity for growth where you are at the present time, move on.

If you really want a career in the field, come to terms with the fact that what you are doing from 9 to 5 each day is not going to get you where you want to go. I don’t know about you, but I find dead ends to be quite scary. Take the time to keep your paths open and boundless.

Friday, December 7, 2012

Autonomy: Essential bioethical principle…or illusion?

by Andrea Johnson, JD, CIP, regulatory specialist in the Research Integrity Office at Oregon Health and Science University 

Respect for persons is one of the three tenets of human research subject protections advanced by The Belmont Report. It includes two moral requirements:
  1. Autonomy of individuals must be acknowledged; and 
  2. Persons with diminished autonomy must be protected. 
The Belmont Report states, “To respect autonomy is to give weight to autonomous persons' considered opinions and choices while refraining from obstructing their actions unless they are clearly detrimental to others.”

When most of us (including me) visualize the ideal informed consent process for a research study, it ends with the potential subject making a decision to participate or not based wholly on his or her individual assessment of an unbiased and comprehensive collection of facts about the research. Wednesday's sessions, however, challenged this notion with additional considerations surrounding the concept of autonomy.

Jennifer Bell, MA, presented a poster during Panel III titled, "'No Man is an Island’: Cancer Patients' Experience of Autonomy Related to Their Decision-Making Process about Clinical Trial Participation." The title says it all. There is more to a potential subject’s decision than his or her own personal interests. Bell described this broader view of autonomy as “relational autonomy.” It is a combination of personal, social, and situational factors. For example, someone may wish to participate in a study, but knows that it would require a family member to drive him or her to study visits. The desire to avoid inconveniencing a family member may prevent the person from participating. Is that family member obstructing the potential subject’s actions? Are such external factors a source of coercion or undue influence? Does this mean that autonomy is an illusion? No, I don’t think autonomy is an illusion.

I do think, though, that our highly individualistic society sometimes fails to recognize the importance of the social and situational factors that influence personal actions. These factors are not necessarily a hindrance to our autonomy; rather, they are contributors to our perspective and sculptors of our values. In some sense, they make each of us the person that we are.

These ideas made me think back to the Keynote Address given by James R. Gavin III, MD, PhD. In his discussion of the barriers to racial and ethnic minorities’ participation in research studies, he identified several ways in which cultural and community misconceptions can deter individuals from becoming subjects. Again, we see external factors playing a role in a person’s decision. When external factors include wrong or missing information, however, there is most certainly a negative effect on autonomy.

Based on these perspectives, I see broadening our concept of autonomy as a way to facilitate greater respect for persons rather than less. Becoming aware of the external factors that influence a potential subject’s decision to participate in research can help us identify and work to correct social misconceptions. Where such external factors are not misconceptions, we can show respect for potential subjects by allowing and encouraging them to seek input from other sources before making their decisions.

Rehabilitating problem-child investigators: An innovative approach

by Susan Trinidad, MA, Research Scientist in the Department of Bioethics & Humanities at the Center for Genomics & Healthcare Equality at the University of Washington

We all have “problem children.” They’ve all been through ethics training – or they’re supposed to have done so – and yet they do things they shouldn’t, sometimes repeatedly. Why do these “bad apples” do what they do, and how can we get them to “knock it off”?

I have taught ethics courses for medical students and nursing students, and I’ve facilitated many group discussions with National Institutes of Health (NIH) trainees as part of my institution’s mandatory responsible conduct of research (RCR) program. Very often the response is, “All those bad things were done by Nazis, and I’m definitely not a Nazi, so why do I have to be here?”

James DuBois, DSc, PhD, professor at St. Louis University, shared some great insights and, better yet, a possible solution in session C22, titled Understanding and Responding to Wrongdoing in Research. We started out with a lively discussion about why wrongdoing occurs, generating quite a long list of reasons. While some experts have posited that professional pressures are largely to blame, Dr. DuBois sees this as only part of the picture. Pressures around funding, tenure, and publication are analogous to the role oxygen plays in making a fire. The pressures are not the spark, or we’d all be on fire all the time!

Why attempt rehabilitation? Well, for one thing, available data suggest that people who misbehave in this way are at high risk of doing it again. In a study published in AJOB Primary Research, DuBois and colleagues report that, out of 100 published cases of misconduct, 81% of wrongdoers repeated their wrongdoing, and 19% went on to commit wrongdoing in different workplaces.

So, ok: rehabilitation makes sense on those grounds. My own sense is that, given that training/mentoring in management, administration, and communication are pretty much nonexistent within the academy (and are sometimes actively selected against!), it’s only fair to start out by giving wrongdoers the benefit of the doubt.

Perhaps surprisingly, however, it turns out that the standard approach – programs aimed at ethics education and formal training in responsible conduct of research like the ones I’ve been involved with – don’t change behavior. One study showed that such training actually correlated with worse behavior!

Dr. DuBois presented a new program, RePAIR (Restoring Professionalism & Integrity in Research), aimed at teaching new habits and ways of thinking to researchers who have acted unethically. This innovative program is a three-day, in-person, small group intervention that is directed toward problem solving and skill development, rather than either informational content or punishment. It’s launching in January.

On Day 1, the focus will be on core values and obstacles to integrity, with guided self-reflection to help researchers remember why they’re doing this work and consider what has shaped their professional development so far. The Day 2 agenda focuses on ethical problem solving, and on Day 3, program staff help participants create a professional development plan. Follow-up calls will occur at two weeks, six weeks, and three months post-intervention to support participants’ adoption of new behaviors and to assess program outcomes.

I wish the team luck with this novel approach, and I’ll be staying tuned to hear how it goes!

Genetics or society? The precarious position of race in clinical trials and medicine

by Rebecca Boxhorn, JD, Research Associate at the Consortium on Law and Values in Health, Environment & the Life Sciences at the University of Minnesota

The second day of the 2012 Advancing Ethical Research (AER) Conference got off to an engaging start thanks to the Keynote Address given by Dr. James R. Gavin III, executive vice president and chief medical officer of Healing Our Village, and clinical professor of medicine at Emory University School of Medicine. Dr. Gavin examined health disparities among minority communities and their relationship to clinical trials. He also demonstrated the inadequate safety and efficacy research of clinical interventions for minority populations and advocated for increased minority recruitment in clinical studies. Dr. Gavin suggested that increased involvement in clinical trials by minorities could lead to more positive medical interventions in communities who bear a disproportionate burden from chronic conditions such as diabetes and heart disease.

Human subjects research has an infamous history in the Black community. From the ethically indefensible acts at Tuskegee to the more nuanced concerns that emerged from Henrietta Lacks’ HeLa cells, the actions of biomedical researchers have fostered mistrust in the Black community. Although Dr. Gavin recognized minority communities’ “justified reluctance” to participate in clinical research, his Healing Our Village organization aims to increase minority engagement in clinical research. In order to achieve diversity in recruitment for clinical trials, Healing Our Village has highlighted the importance of cultural competence: honoring and respecting the beliefs, language, and behaviors of clinical trial participants. Increasing minority participation, Dr. Gavin argued, is required to improve the health status of medically underserved populations.

The inclusion of racial criteria in medicine is not without controversy, however. In 2005, the Food and Drug Administration approved BiDil, the “first treatment specifically for African Americans with heart failure.” The approval of BiDil set off a flurry of debate about the inclusion of self-identified racial categories in medicine. Professor Dorothy E. Roberts has been a leading critic of BiDil’s race-based indication. In a 2011 piece, Roberts argues that race-specific medicine is scientifically flawed, commercially motivated, and politically dangerous. Roberts also argues that race is a political, rather than biological, grouping. Health disparities, therefore, result “primarily [from] social inequality” rather than genetic or biological differences. The “biological definition of race” in medicine, she argues, is a false cure for health disparities and “threatens to make health and other social inequalities even worse.” Although Dr. Gavin alluded to the genetic heterogeneity of the African American population, the emphasis on recruiting self-identified minority populations in clinical trials would appear to support a biological understanding of race. Biological difference, however, cannot explain away health disparities altogether. As Professor Roberts notes, several studies undercut the correlation between genetics and the comparatively poor health status of minorities in the United States.

Although Dr. Gavin did not address these controversies, they are as present in clinical trials as in race-based pharmaceutical marketing. Both Dr. Gavin and Professor Roberts seek to reduce health disparities in minority populations. Their approaches to this unifying goal, however, may depend on incompatible definitions of race.

The emperor has no clothes

by Julie Fine, BS, legal specialist, Legal Division, Worldwide Research and Development, Pfizer Inc. (Please note: The views presented here are my own and do not reflect the positions or policies of Pfizer, Inc.)

During lunch on Tuesday, I took advantage of the opportunity to join other conference attendees for an intimate and informal question and answer session with Keynote Speaker, John P. A. Ioannidis, MD, PhD, who was as candid and provocative in this setting as he was earlier that morning during his talk. In his remarks during the Keynote Address, Dr. Ioannidis talked about bias in research reporting and faulty results, exposing an uncomfortable truth. During the lunch, Dr. Ioannidis entertained several questions from the group, but the following four were particularly notable for their challenging nature and his unflinching responses.

How would “Cloud” publications work and how would journals get to be “first” in this model? 
In the Keynote Address, one proposal for enhancing the transparency of reported results was a collective repository of publications, essentially “Cloud” publications. It would make published results accessible, searchable by interests, and available to journals. Dr. Ioannidis quipped that the journals would be able to make offers to authors (e.g., $500K from one journal, $750K from another, $1M from yet another) and authors could decide which offer to accept (the $1M, of course).

On a serious note, though, he commented that if journals were more apt to publish negative results it would be a valuable service and it might eliminate costly research into non-effective therapies and perhaps save lives. I imagine another positive outcome would be that researchers could redirect their efforts into more promising areas and redesign protocols based on findings from previous trials.

What are your thoughts on the registration of observational data? 
Dr. Ioannidis suggested that, historically, researchers publish their work in the field, one outcome at a time, narrowly showing what’s been tested. In what he described as his somewhat utopian view, all observational data would be registered and categorized into different levels. One level might be registration of data sets – what it is, what was published, how it was analyzed – thus, researchers would “show their cards.” A second level might involve registration of protocols in a highly pre-specified way, showing the exact protocol and outcome, and indicating if it was a prospective study.

What more can institutional review boards (IRBs) do? 
Dr. Ioannidis believes that IRBs should insist on registration in support of retrospective review. He suggests a systematic process for evaluating whether or not an investigator’s study is “worth it.” This evaluation would include questions such as:
  1. What research has been conducted previously?
  2. How is this study unique?
  3. Has the researcher’s prior work (for example, a randomized trial completed two years ago) been published? And if not, should additional research be approved? 
What can be done about the whole medical enterprise? 
Dr. Ioannidis recommended that medical education include more training to enable physicians to better conduct research. He suggested instruction on optimal research design and methods (which would help tackle the issue of bias), understanding risk, and appropriate interpretation of results. In her remarks at the end of the session, Joan Rachlin, JD, MPH, executive director of PRIM&R, complimented Dr. Ioannidis for having the courage to be like the boy in the children’s story who boldly called out the emperor who had no clothes – reminding us not to be fooled or go along blindly, but rather to look clearly, deeply, and honestly at the research enterprise.

Thursday, December 6, 2012

"I am an Army of one": How can short staffed IRB administrations make IRB education possible?

by Royell Sullivan, Institutional Review Board (IRB) Education Specialist at the New York University (NYU) School of Medicine

On Tuesday, I attended session B25, titled Developing and Implementing an Education Program at an Institution with a Small Research Program. Many of the attendees expressed that while they saw the value of implementing a formal educational program, they simply lacked the resources and support to do so.

Quite often, small institutions have a limited number of IRB staff members. A co-presenter of this session, Eric Allen, MA, CIP, CPIA, shared his experience working as “an army of one,” a familiar concept among the session’s attendees. Administrators pour so much of their energy into administrative tasks that there is no time or support for educational program development. Small institutions may only have the resources to distribute reading materials and ensure completion of CITI training. Sometimes, this just is not enough to meet the educational needs of the research community.

Michelle Feige of the Office for Human Research Protections' (OHRP) Division of Education indicated that while OHRP encourages education of investigators, staff, and administration, it is not a requirement. This can make it difficult to obtain the support of your institutional official when trying to implement an educational program that focuses on IRB regulations.

I feel it is important to reach out to your institutional official and explain why IRB education would be beneficial for all parties involved. Educating investigators about the requirements of the IRB can make the application process easier for them, improve the quality of submissions, and encourage compliance for the duration of the research. Educating the staff and board members about the regulations and how to apply them may reduce confusion, facilitate quality reviews, and improve turn-around time. Educating the institutional official about the importance of continuous compliance provides him/her with an incentive to support your educational efforts.

All parties need to understand how and why they would directly benefit from IRB education in order to make it possible and successful. I advocate premeditated education versus reactive intervention. I believe in educating before things go wrong. Reach out to your institutional official for support before a serious non-compliance issue rears its ugly head and leaves multiple parties to suffer the consequences.

The more we know, the more we know how little we know...

by Andrea Johnson, JD, CIP, regulatory specialist in the Research Integrity Office at Oregon Health and Science University 

John P. A. Ioannidis, MD, PhD, the 2012 Advancing Ethical Research Conference keynote speaker on Tuesday, raised provocative questions about the information we obtain through scientific research. Among his many achievements, Dr. Ioannidis is well known for a 2005 paper published in PLOS Medicine titled “Why Most Published Research Findings Are False.” In his presentation today, he elaborated on this proposition, describing compelling evidence that much of the scientific literature we have come to know and rely upon is infiltrated by numerous sources of bias.

At first, I reacted with alarm. If everything is biased, or even “false,” how can we trust that we know anything? Is this research, for the sake of which we expose so many human subjects to varying degrees of risk, really worth it?

Aside from being intriguing philosophical considerations, these questions carry practical implications for institutional review board (IRB) review of research. An IRB that assesses the risks and benefits of research must consider the likely value of the knowledge to be gained through the research. The IRBs that I have worked with have considered the quality of a study’s scientific design, looking out for obvious signs of bias or lack of control of variables, in appraising this likely value. However, IRB members have a limited amount of expertise on the research topics, and the board often relies on the information provided by the investigator regarding scientific background and justification for the study. Dr. Ioannidis made me wonder whether this system could ever be enough, not only because the investigator is likely to view the literature with his or her own bias, but also because the literature itself cannot be trusted.

But let’s take a reality check. Stepping back and looking at the big picture, it seems pretty indisputable that research has moved science and medicine forward. Furthermore, it is not practical for an IRB to conduct its own independent literature analysis for every project it reviews. Nor is that practice expected by our federal regulators, as written in OHRP’s Guidance on IRB Continuing Review of Research (November 2010), which states, “[N]ote that OHRP does not expect the IRB to perform an independent review of the relevant scientific literature related to a particular research project undergoing continuing review; this responsibility rests with the investigators and any monitoring entity for the research.”

As with many challenges in the protection of human research subjects, there is a balance to be struck here. To OHRP’s point, an IRB can review a study’s monitoring plan to ensure that it promotes more objective consideration of relevant literature. Additionally, Dr. Ioannidis advocated that both positive and negative research results should be published and accessible to the public.

These are both good ways to help mitigate the problem, but the session still left me feeling a bit skeptical and unsettled. I suppose that could be Dr. Ioannidis’s goal—cultivate a healthy dose of skepticism and discomfort that will keep us on our toes and continuously searching for ways to improve the review process.

What's comparative effectiveness research all about?

by Susan Trinidad, MA, Research Scientist in the Department of Bioethics & Humanities at the Center for Genomics & Healthcare Equality at the University of Washington

I sure hope somebody is reading this, since I am inside (near a power outlet) instead of enjoying the sunshine during our afternoon break! In this post, I’ll be sharing some of the highlights from didactic workshop A4, Comparative Effectiveness Research: What Bioethicists Need to Know.

This was an all-star panel: Walter Straus, MD, MPH, PRIM&R Board Member, and global director of scientific affairs for Merck’s vaccines division; Hugh Tilson, MD, DrPH, another Board member with faculty appointments at a minimum of two universities (and I think there were more!); Steven Teutsch, MD, MPH, chief science officer of the Los Angeles County Department of Public Health and co-editor of a new book, Public Health Practice: What Works; and Newell McElwee, PharmD, MSPH, executive director of outcomes research at Merck & Co.

Dr. Straus provided a brief orientation to comparative effectiveness research (CER), which has become a hot topic since its inclusion in the Affordable Care Act and the establishment of the Patient-Centered Outcomes Research Institute (PCORI). CER is aimed at closing the substantial knowledge gap in understanding what works and doesn't work in health care, with the goal of enhancing patients' decision-making in the clinical setting. As Dr. Straus pointed out, with $1 billion set aside to fund CER over the next seven years, there's going to be more of this on our radar. But these studies will likely look quite different from randomized controlled trials by design—and what will that mean for institutional review boards?

Dr. Tilson posed the key questions: Where does CER fit in the grand scheme of similar activities? How is it like (and unlike) quality improvement, public health surveillance, and other observational studies? And at what point, and in what way, might ethics review or other human research protections program (HRPP) oversight come into play for CER? He unveiled a hot-off-the-presses copy of the "boundaries report," Health-Related Activities along the Boundary between Research and Practice: When to Take Alternate Approaches to Providing Ethical Oversight, a new white paper produced as part of PRIM&R's Boundaries Project that addresses some of these questions.

Next up was Dr. Teutsch to explain what CER is, some of the history that led to its development, what its goals are, and some of the open questions about where it might take us over the next few years. He traced the development of CER in relation to other turning points, including John Wennberg’s work on geographic practice variation, the move toward evidence-based medicine, and Centers for Medicare and Medicaid Services (CMS) policies supporting coverage with evidence development. He also proposed an important distinction between evidence synthesis and evidence-based decision making, which you can read about here.

Dr. McElwee shared his insights about how CER and PCORI came to prominence, noting that the “patient-centered” emphasis may have been a calculated choice on the part of Congressional lawmakers who wanted to address health care quality, but knew that an industry-focused approach wouldn’t fly. Key milestones in the process were a 2006 paper by Gail Wilensky, former head of CMS, that laid out a blueprint for effectiveness research; a New England Journal of Medicine article by Peter Orszag, then with the Congressional Budget Office, which asserted that current healthcare costs were unsustainable; and the Institute of Medicine Report, Knowing What Works in Health Care.

Revising the consent document: NCI style

by Jackie Tekiela, MS, CIP, Institutional Review Board (IRB) Administrator at Wheaton Franciscan Healthcare

The National Cancer Institute (NCI) recently completed a two-year effort to revise the NCI informed consent template, resulting in shorter and more concise informed consent documents. In session A18, titled Rewriting the National Cancer Institute (NCI) Informed Consent Template, Jeanne M. Adler, RN, MPH, CCRP, presented the background, method, and rationale behind the changes. Throughout the presentation Jeanne offered specific examples of language and excerpts from the draft template.

In preparing the template, working groups started with a “blank page and the regulations” and developed a concise consent template. The groups consisted of patient advocates, institutional review board (IRB) chairs, central IRB (CIRB) chairs and members, clinical research associates (CRAs), investigators, nurses, and cooperative group regulatory and protocol development staff, and consulted with individuals from the U.S. Food and Drug Administration (FDA) and Office for Human Research Protections (OHRP).

Applying this consent template to three closed studies, the working groups reduced the length of each consent document from 16 pages to seven.

New template features included:
  • Description of how a study differs from standard treatment 
  • Risk section formatted as tables for each drug or regimen/arm 
  • Clearer definition of risk frequency (“x out of one hundred,” rather than percentage) 
  • Risks described from study participant perspective 
  • Brief description of standard care to place research in context 
  • Section length limits 
  • Doctor’s contact information in one place 
  • Two study titles – both a lay title and technical title 
  • More text examples, covering different types and phases of studies, mandatory specimen collection, optional research biopsy, future studies consent, and optional specimen collection 
Other interesting points:
  • Suggested limits on word or page count, or on estimated reading time 
  • Emphasized the role of the informed consent document (ICD): to summarize and document the process, not provide detailed descriptions that should be part of the process 
  • Attachments should be informative and optional 
  • Forms should include risks of procedures only if part of research question, not if part of usual care 
  • Correlative trials should be embedded into the ICD and concisely worded
Handouts for the session are available on the Conference Passport (your code to access the site was emailed to you and is on the back of your conference name badge).

The finalized template is expected to be released in January 2013 (effective date for new trials will be 6-8 weeks after that). It will be distributed to everyone in the Cancer Therapy Evaluation Program (CTEP) database, cancer centers, and partners. Once posted, anyone will be able to access it on the Investigator Resources page of CTEP’s website.
 