肝胆相照论坛

Title: You don't want to read this if you are sick.

Author: StephenW    Time: 2011-3-30 16:50    Title: You don't want to read this if you are sick.

Health Care Myth Busters: Is There a High Degree of Scientific Certainty in Modern Medicine?

Two doctors take on the health care system in a new book that aims to arm people with information

Editor's Note: The following is an excerpt from the new book Demand Better! Revive Our Broken Health Care System (Second River Healthcare Press, March 2011) by Sanjaya Kumar, chief medical officer at Quantros, and David B. Nash, dean of the Jefferson School of Population Health at Thomas Jefferson University. In the following chapter they explore the striking dearth of data and the persistent uncertainty that clinicians often face when making decisions.


Myth: There is a high degree of scientific certainty in modern medicine

"In America, there is no guarantee that any individual will receive high-quality care for any particular health problem. The healthcare industry is plagued with overutilization of  services, underutilization of services and errors in  healthcare practice." – Elizabeth A. McGlynn, PhD, Rand Corporation researcher, and colleagues. (Elizabeth A. McGlynn, PhD; Steven M. Asch, MD, MPH; et al. "The Quality of Healthcare Delivered to Adults in the United States," New England Journal of Medicine 2003;348:2635-2645.)


Most of us are confident that the quality of our healthcare is the finest, the most technologically sophisticated and the most scientifically advanced in the world. And for good reason—thousands of clinical research studies are published every year that indicate such findings. Hospitals advertise the latest, most dazzling techniques to peer into the human body and perform amazing lifesaving surgeries with the aid of high-tech devices. There is no question that modern medical practices are remarkable, often effective and occasionally miraculous.

But there is a wrinkle in our confidence. We believe that the vast majority of what physicians do is backed by solid science. Their diagnostic and treatment decisions must reflect the latest and best research. Their clinical judgment must certainly be well beyond any reasonable doubt. To seriously question these assumptions would seem jaundiced and cynical.

But we must question them because these beliefs are based more on faith than on facts for at least three reasons, each of which we will explore in detail in this section. Only a fraction of what physicians do is based on solid evidence from Grade-A randomized, controlled trials; the rest is based instead on weak or no evidence and on subjective judgment. When scientific consensus exists on which clinical practices work effectively, physicians only sporadically follow that evidence correctly.

Medical decision-making itself is fraught with inherent subjectivity, some of it necessary and beneficial to patients, and some of it flawed and potentially dangerous. For these reasons, millions of Americans receive medications and treatments that have no proven clinical benefit, and millions fail to get care that is proven to be effective. Quality and safety suffer, and waste flourishes.

We know, for example, that when a patient goes to his primary-care physician with a very common problem like lower back pain, the physician will deliver the right treatment with real clinical benefit about half of the time. Patients with the same health problem who go to different physicians will get wildly different treatments. Those physicians can't all be right.

Having limited clinical evidence for their decision-making is not the only gap in physicians' scientific certainty. Physician judgment—the "art" of medicine—inevitably comes into play, for better or for worse. Even physicians with the most advanced technical skills sometimes fail to achieve the highest quality outcomes for their patients. That's when resourcefulness—trying different and potentially better interventions—can bend the quality curve even further.

And, even the most experienced physicians make errors in diagnosing patients because of cognitive biases inherent to human thinking processes. These subjective, "nonscientific" features of physician judgment work in parallel with the relative scarcity of strong scientific backing when physicians make decisions about how to care for their patients.

We could accurately say, "Half of what physicians do is wrong," or "Less than 20 percent of what physicians do has solid research to support it." Although these claims sound absurd, they are solidly supported by research that is largely agreed upon by experts. Yet these claims are rarely discussed publicly. It would be political suicide for our public leaders to admit these truths and risk being branded as reactionary or radical. Most Americans wouldn't believe them anyway. Dozens of stakeholders are continuously jockeying to promote their vested interests, making it difficult for anyone to summarize a complex and nuanced body of research in a way that cuts through the partisan fog and satisfies everyone's agendas. That, too, is part of the problem.

Questioning the unquestionable
"The problem is that physicians don't know what they're doing." That is how David Eddy, MD, PhD, a healthcare economist and senior advisor for health policy and management for Kaiser Permanente, put the problem in a BusinessWeek cover story about how much of healthcare delivery is not based on science. Plenty of proof backs up Eddy's glib-sounding remark.

The plain fact is that many clinical decisions made by physicians appear to be arbitrary, uncertain and variable. Reams of research point to the same finding: physicians looking at the same thing will disagree with each other, or even with themselves, from 10 percent to 50 percent of the time during virtually every aspect of the medical-care process—from taking a medical history to doing a physical examination, reading a laboratory test, performing a pathological diagnosis and recommending a treatment. Physician judgment is highly variable.

Here is what Eddy has found in his research. Give a group of cardiologists high-quality coronary angiograms (a type of radiograph or x-ray) of typical patients and they will disagree about the diagnosis for about half of the patients. They will disagree with themselves on two successive readings of the same angiograms up to one-third of the time. Ask a group of experts to estimate the effect of colon-cancer screening on colon-cancer mortality and answers will range from five percent to 95 percent.

Ask fifty cardiovascular surgeons to estimate the probabilities of various risks associated with xenografts (animal-tissue transplant) versus mechanical heart valves and you'll get answers to the same question ranging from zero percent to about 50 percent. (Ask about the 10-year probability of valve failure with xenografts and you'll get a range of three percent to 95 percent.)

Give surgeons a written description of a surgical problem, and half of the group will recommend surgery, while the other half will not. Survey them again two years later and as many as 40 percent of the same surgeons will disagree with their previous opinions and change their recommendations. Research studies back up all of these findings, according to Eddy.
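
What Eddy is describing is, in statistical terms, poor inter-observer and intra-observer agreement. As a minimal sketch (ours, not the book's), here is how such agreement is commonly quantified with Cohen's kappa; the ten angiogram readings below are made up for illustration:

    # A sketch of Cohen's kappa, a standard measure of observer agreement.
    # The readings are hypothetical: one cardiologist's two passes over the
    # same ten angiograms (1 = stenosis judged significant, 0 = not).
    from collections import Counter

    def cohens_kappa(first, second):
        """Chance-corrected agreement: 1.0 is perfect, 0.0 is chance-level."""
        n = len(first)
        observed = sum(a == b for a, b in zip(first, second)) / n
        # Chance agreement: the probability that two independent readings
        # happen to assign the same label.
        freq1, freq2 = Counter(first), Counter(second)
        expected = sum((freq1[lab] / n) * (freq2[lab] / n)
                       for lab in set(first) | set(second))
        return (observed - expected) / (1 - expected)

    first_pass  = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
    second_pass = [1, 1, 1, 0, 0, 1, 0, 1, 1, 0]  # self-disagreement on 4 of 10 films
    print(round(cohens_kappa(first_pass, second_pass), 2))  # 0.17: poor agreement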

Because physician judgment varies so widely, so do treatment decisions; the same patient can go to different physicians, be told different things and receive different care. When so many physicians have such different beliefs and are doing such different things, it is impossible for every physician to be correct.

Why are so many physicians making inaccurate decisions in their medical practices? It is not because physicians lack competence, sincerity or diligence, but because they must make decisions about tremendously complex problems with very little solid evidence available to back them up. (That situation is gradually changing with the explosion in medical literature. Recent surveys by the Healthcare Information and Management Systems Society (HIMSS) reveal that an increasing number of hospitals and healthcare organizations are adopting technologies to keep up with the flow of research, such as robust, computerized physician-order-entry (CPOE) systems to ensure appropriate drug prescribing.)

Most physicians practice in a virtually data-free environment, devoid of feedback on the correctness of their practice. They know very little about the quality and outcomes of their diagnosis and treatment decisions. And without data indicating that they should change what they're doing, physicians continue doing what they've been doing all along.

Physicians rely heavily on the "art" of medicine, practicing not according to solid research evidence, but rather by how they were trained, by the culture of their own practice environment and by their own experiences with their patients.

For example, consider deep-vein-thrombosis (DVT) prophylaxis, that is, therapy to prevent dangerous blood clots in vessels before and after operations in the hospital. Research offers solid, Grade-A evidence about how to prevent DVT in the hospital. But only half of America's hospitals follow these practices. That raises an important question: Why? We have the science for that particular sliver of care. How come we still can't get it right?

The core problem we would like to examine here is that a disturbingly large chunk of medical practice is still "craft" rather than science. As we've noted, relatively little actionable science is available to guide physicians and physicians often ignore proven evidence-based guidelines when they do exist. A guild-like approach to medicine—where every physician does it his or her way—can create inherent complexity, waste, proneness to error and danger for patients.

A great example comes from Peter Pronovost, MD, PhD, a patient-safety expert and a professor of anesthesiology, critical-care medicine and surgery at the Johns Hopkins University School of Medicine. He is co-author of Safe Patients, Smart Hospitals: How One Doctor's Checklist Can Help Us Change Health Care from the Inside Out. In a televised interview about his book, Pronovost said that we (that is, physicians) knew that we were killing people with preventable central-line bloodstream infections in hospitals and accepted it as a routine part, albeit a toxic side effect, of practice. We were probably killing more people that way than die of breast cancer. We tolerated it because our practices didn't use available scientific evidence that showed us how to prevent such infections. We ignored the science, and patients paid the price with their lives.

Cost is another toxic by-product of care delivery practices that are not based on solid science and the tremendous clinical variation that results from them.




Author: StephenW    Time: 2011-3-30 16:50

Doing the right thing only half of the time
When we look at how well physicians are really doing, it's scary to see how off the mark they are. Anyone who feels self-assured about receiving the best medical care that science can offer is in for a shock, considering some eye-opening research that shows how misplaced that confidence is. Let's start with how well physicians do when they have available evidence to guide their practices.

The best answer comes from seminal research by the Rand Corporation, a respected research organization known for authoritative and unbiased analyses of complex topics. On average, Americans only receive about half of recommended medical care for common illnesses, according to research led by Elizabeth McGlynn, PhD, director of Rand's Center for Research on Quality in Health Care. That means the average American receives care that fails to meet professional evidence-based standards about half of the time.

McGlynn and her colleagues examined thousands of patient medical records from around the country for physician performance on 439 indicators of quality of care for thirty acute and chronic conditions as well as preventive care, making the Rand study one of the largest of its kind ever undertaken. The researchers examined medical conditions representing the leading causes of illness, death and healthcare service use across all age groups and types of patients. They reviewed national evidence-based practice guidelines that offer physicians specific and proven care processes for screening, diagnosis, treatment and follow-up care. Those guidelines were vetted by several multispecialty expert panels as scientifically grounded and clinically proven to improve patient care.

For example, when a patient shows up for hip surgery, the physician is supposed to ensure that he or she receives drugs to prevent blood clots and then a preventive dose of antibiotics.

Even though clinical guidelines exist for practices like these, McGlynn and her colleagues found something shocking: physicians get it right about 55 percent of the time across all medical conditions. In other words, patients receive recommended care only about 55 percent of the time, on average. It doesn't matter whether that care is acute (to treat current illnesses), chronic (to treat and manage conditions that cause recurring illnesses, like diabetes and asthma) or preventive (to avert acute episodes like heart attack and stroke).

How well physicians did for any particular condition varied substantially, ranging from about 79 percent of recommended care delivered for early-stage cataracts to about 11 percent of recommended care for alcohol dependence. Physicians prescribe the recommended medication about 69 percent of the time, follow appropriate lab-testing recommendations about 62 percent of the time and follow appropriate surgical guidelines 57 percent of the time. Physicians adhere to recommended care guidelines 23 percent of the time for hip fracture, 25 percent of the time for atrial fibrillation, 39 percent for community-acquired pneumonia, 41 percent for urinary-tract infection and 45 percent for diabetes mellitus.

Underuse of recommended services was actually more common than overuse: about 46 percent of patients did not receive recommended care, while about 11 percent of participants received care that was not recommended and was potentially harmful.

Here is disturbing proof that physicians often fail to follow solid scientific evidence of what "quality care" is in providing common care that any of us might need:

• Only one-quarter of diabetes patients received essential blood-sugar tests.
• Patients with hypertension failed to receive one-third of the recommended care.
• Coronary-artery-disease patients received only about two-thirds of the recommended care.
• Just under two-thirds of eligible heart-attack patients received aspirin, which is proven to reduce the risk of death and stroke.
• Only about two-thirds of elderly patients had received or been offered a pneumococcal vaccine (to help prevent them from developing pneumonia).
• Scarcely more than one-third of eligible patients had been screened for colorectal cancer.

These findings have shaped the conversation among experts on American healthcare quality by establishing a national baseline for the status quo. That baseline is jarring and disturbing. The gap between what is proven to work and what physicians actually do poses a serious threat to the health and well-being of all of us. That gap persists despite public- and private-sector initiatives to improve care. Physicians need either better access to existing information for clinical decision-making or stronger incentives to use that information.

Inappropriate use of medical services (both underuse and overuse) by physicians is rampant, affecting millions of patients. We know that because some of the nation's leading healthcare quality and safety experts reviewed several large-scale national studies and presented their findings to the President's Advisory Commission on Consumer Protection and Quality in the Health Care Industry, which was formed during President Bill Clinton's administration.

The commission released a report in March 1998 that stated: "Exhaustive research documents the fact that today, in America, there is no guarantee that any individual will receive high-quality care for any particular health problem. The healthcare industry is plagued with overutilization of services, underutilization of services and errors in healthcare practice." The central problem, as the Rand study had revealed, is clinicians' failure to follow evidence-based best-practice guidelines that exist and have been proven to enhance the quality of healthcare delivery.

The commission's report acknowledged that physicians may have difficulty keeping up with an explosive growth in medical research, noting that the number of published randomized, controlled trials had increased from an average of 509 annually between 1975 and 1980 to 8,636 annually from 1993 through 1997. That's just for randomized, controlled trials. Several other types of studies considerably increase the number of annual research articles that physicians must keep up with to be current on scientific research findings. Plus, that was more than ten years ago; the numbers grow more rapidly each year.

From these data, the report concluded that a troubling gap exists between best practices and actual practices and that the likelihood that any particular patient will get the best care possible varies considerably. Translation: physicians aren't following the evidence.

Hospitals are on the hook as well and show wide gaps in their delivery of recommended care. The Leapfrog Group is a consortium of large employers that reports and compares hospital quality-performance data to help companies make healthcare purchasing decisions. (Full disclosure: Sanjaya Kumar is president and CEO of Quantros, the company that hosts the Leapfrog Group's Hospital Safety Survey.) The group tracks more than 1,200 U.S. hospitals that voluntarily report how well they adhere to a variety of evidence-based quality measures that are endorsed by the National Quality Forum (NQF) or are consistent with those of The Joint Commission and the Federal Centers for Medicare & Medicaid Services.

Results from the Leapfrog Group's 2009 hospital survey show that just over half of hospitals meet Leapfrog's quality standard for heart-bypass surgery; under half meet its standard for heart angioplasty; and under half of hospitals meet Leapfrog's quality standards for six common procedures, including high-risk surgery, heart-valve replacement and high-risk deliveries, even though nationally accepted scientific guidelines for these procedures exist and have been proven to save lives.

It's disturbingly clear from these studies that too many physicians and hospitals are not applying known, evidence-based and available guidelines for quality practice. Physicians are either ignoring or unaware of much better ways to treat their patients.

Knowing the right thing only one-fifth of the time
Failing to follow existing guidelines is only part of what makes so much of medical practice "unscientific." Another key reason is that there are so few solid, actionable scientific guidelines to begin with, and those that are available cover a relatively small slice of clinical care.

Part of the problem is that science, technology and culture are all moving targets. Today's dogma is tomorrow's folly, and vice versa. Many examples show that what physicians once accepted as truth has been totally debunked. Twenty years ago, for instance, physicians believed that lytic therapy (clot-busting medication given to heart-attack patients) would prolong a heart attack when given after a myocardial infarction; today it is standard practice. Angioplasty and intracoronary lysis of clots are other examples. Years ago, surgery for benign prostatic hypertrophy (enlarged prostate) was one of the top DRGs (illnesses billed by hospitals) under Medicare. Today, we do far fewer of these procedures because of new drugs.

The public has little idea that physicians are playing a sophisticated guessing game every single day. That is a scary thought. We hope that one day we'll look back, for example, on cancer chemotherapy the same way we look back at the use of leeches, cupping and bloodletting.

Another part of the problem is that clinical knowledge generated by randomized, controlled trials takes far longer to reach the front lines of medical care than most people realize. Turning basic scientific discoveries into innovative therapies, from "laboratory bench to bedside," takes up to 17 years. Meanwhile, the scientific literature expands and is substantially revised every two years, which widens the knowledge gap at the bedside.

Time lag notwithstanding, thousands of research articles are published every year, which presents a different challenge to delivering care based on the strongest evidence. Physicians can't always keep up with the volume of knowledge to be reviewed and put into practice, and those who fall behind provide poorer-quality care. Medical advances occur frequently, and detailed knowledge quickly goes out of date.

Here's a counterintuitive consequence: the more years of practice experience a physician has, the more out-of-date his or her practice patterns may be. Research has documented this phenomenon of decreasing quality of clinical performance with increasing years in practice. Although we generally assume that the knowledge and skills that physicians accumulate during years of practice lead to superior clinical abilities, those physicians may paradoxically be less likely to provide what the latest scientific evidence says is appropriate care! It's all about the evidence and keeping up with it.

But just how comprehensive is the available scientific evidence for effective clinical practices? It is slimmer than most people think. Slice a pie into five pieces, and remove one piece. That slice represents the roughly 20 percent of clinical-care practices for which solid randomized, controlled trial evidence exists. The remaining four-fifths represent medical care delivered based upon a combination of less reliable studies, unsystematic observation, informed guesswork and conformity to prevailing treatments and procedures used by most other clinicians in a local community.

To illustrate how little scientific evidence often exists to justify well-established medical treatments, David Eddy researched the scientific evidence underlying a standard and widely used glaucoma treatment designed to lower pressure in the eyeball. He searched published medical reports back to 1906 and could not find one randomized, controlled trial of the treatment. That was despite decade after decade of confident statements about it in textbooks and medical journals, statements which Eddy found had simply been handed down from generation to generation. The kicker was that the treatment was harmful to patients, actually causing more cases of blindness rather than fewer.

Similar evidence deficits exist for other common medical practices, including colorectal screening with regular fecal-occult-blood tests and sigmoidoscopy; annual chest x-rays; surgery for enlarged prostates; bone-marrow transplants for breast cancer; and common approaches to pain control, depression, immunizations, cancer screening, alcohol and drug abuse, smoking and functional disabilities. The problem is rampant across medicine; a huge amount of what physicians do lacks a solid base of scientific evidence.

In the past, many standard and accepted practices for clinical problems were simpler and more straightforward than those that today's clinicians face—and these practices seem to have worked, despite the paucity of good research evidence. Physicians simply made subjective, intuitive decisions about what worked based on what they observed. The problem today is that the growing complexity of medicine bombards clinicians with a chaotic array of clinical choices, ambiguities and uncertainties that exceeds the inherent limitations of the unaided human mind. As a result, many of today's standard clinical practices bear no relation to any evidence of effectiveness.

Author: StephenW    Time: 2011-3-30 16:52

Instead, physicians frequently base their decisions on shortcuts, such as the actions of the average practitioner ("if everyone is doing it, the intervention must be appropriate"); the commonness of the disease ("if the disease is common, we have no choice but to use whatever treatment is available"); the seriousness of the outcome ("if the outcome without treatment is very bad, we have to assume the treatment will work"); the need to do something ("this intervention is all we have"); and the novelty or technical appeal of the intervention ("if the machine takes a pretty picture, it must have some use").

Drug prescribing is another blatant example of medical practice that is often evidence-free. Drugs that are known to be effective may work well for only 60 percent of people who take them. But about 21 percent of drug prescriptions in the United States are for "off-label" use, that is, to treat conditions for which they have not been approved by the U.S. Food and Drug Administration. That's more than 150 million prescriptions per year. Off-label use is most common among cardiac medications (46 percent) and anticonvulsants (46 percent). Here's the real punch line: in 73 percent of the cases where drugs are used in unapproved ways, there is little or no evidence that they work. Physicians prescribe drugs well over a hundred million times a year with little or no scientific support.
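
A quick back-of-the-envelope check of that last claim, using only the figures quoted in this paragraph (the arithmetic is ours; the inputs are the chapter's):

    # Illustrative arithmetic only; both inputs are the figures quoted above.
    off_label_rx_per_year = 150_000_000  # "more than 150 million prescriptions per year"
    share_little_evidence = 0.73         # "73 percent ... little or no evidence that they work"

    unsupported_rx = off_label_rx_per_year * share_little_evidence
    print(f"{unsupported_rx:,.0f} prescriptions/year")  # 109,500,000 -> over a hundred million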

These are fighting words, saying that such a big chunk of medical practice is not based on science. To illustrate just how provocative this topic is, look at what happened in the 1990s when the Federal Agency for Health Care Policy and Research (now the Agency for Healthcare Research and Quality) released findings from a five-year investigation of the effectiveness of various treatments for low back pain—one of the leading reasons that Americans see physicians.

Between 1989 and 1994, an interdisciplinary Back Pain Patient Outcomes Assessment Team (BOAT) at the University of Washington Medical School in Seattle set out to determine what treatment strategies work best and for whom. Led by back expert Richard A. Deyo, MD, MPH, the team included orthopedic surgeons, primary-care physicians, physical therapists, epidemiologists and economists. Together, they examined the relative value of various diagnostic tests and surgical procedures.

They conducted a comprehensive review of clinical literature on back pain. They exhaustively examined variations in the rates at which different procedures were being used to diagnose and treat back pain. Their chief finding was deeply disturbing: what physicians thought worked well for treating low back pain doesn't. The implication was that a great many standard interventions for low back pain may not be justified. And that was immensely threatening to physicians, especially surgeons who perform back operations for a living.

Among the researchers' specific findings: no evidence shows that spinal-fusion surgery is superior to other surgical procedures for common spine problems, and such surgery leads to more complications, longer hospital stays and higher hospital charges than other types of back surgery.

Disgruntled orthopedic surgeons and neurosurgeons reacted vigorously to the researchers' conclusion that not enough scientific evidence exists to support commonly performed back operations. The surgeons joined with Congressional critics of the Clinton health plan to attack federal funding for such research and for the agency that sponsored it. Consequently, the Agency for Health Care Policy and Research had its budget for evaluative research drastically slashed.

The back panel's guidelines were published in 1994. Since then, even though there are still no rigorous, independently funded clinical trials showing that back surgery is superior to less invasive treatments, surgeons continue to perform a great many spinal fusions. The number increased from about 100,000 in 1997 to 303,000 in 2006.

What are physicians to do? They need a great deal more reliable information than they have, especially when offering patients life-changing treatment options. Before recommending surgery or radiation treatment for prostate cancer, for example, physicians and their patients must compare the benefits, harms and costs of the two treatments and decide which is the more desirable.

One treatment might deliver a higher probability of survival but also have bad side effects and high costs, while the alternative treatment might deliver a lower probability of survival but have no side effects and lower costs. Without valid scientific evidence about those factors, the patient may receive unnecessary and ineffective care, or fail to receive effective care, because neither he nor his physician can reliably weigh the benefits, potential harm and costs of the decision.

Recognizing that the quality and reliability of clinical-research information vary greatly, entities like the U.S. Preventive Services Task Force (USPSTF) have devised rating systems to rank the strength of available evidence for certain treatments. The strongest evidence is the scarcest and comes from systematic review of studies (randomized, controlled trials) that are rigorously designed to factor out biases and extraneous influences on results. Weaker evidence comes from less rigorously designed studies that may let bias creep into the results (for example, trials without randomization or cohort or case-control analytic studies). The weakest evidence comes from anecdotal case reports or expert opinion that is not grounded in careful testing.

Raymond Gibbons, MD, a professor of medicine at the Mayo Clinic and past president of the American Heart Association, puts it well: "In simple terms, Class I recommendations are the 'do's'; Class III recommendations are the 'don'ts'; and Class II recommendations are the 'maybes.'" The point is this: even physicians who follow guidelines must deal with scientific uncertainty. There are a lot more "maybes" than "do's."

Even the "do's" require value judgments, and it is important to be clear about what evidence-based practice guidelines can and cannot do, regardless of the strength of their scientific evidence. Guidelines are not rigid mandates or "cookie-cutter" recommendations that tell physicians what to do. They are intended to be flexible tools to help physicians and their patients make informed decisions about their care.

Even guidelines that are rooted in randomized, controlled trial research do not make clinical decisions for physicians; rather, they must be applied to individual patients and clinical situations based on value judgments, both by physicians and their patients. Clinical decision-making must entail value judgments about the costs and benefits of available treatments. What strong guidelines do is to change the anchor point for the decision from beliefs about what works to evidence of what works. Actual value-based treatment decisions are a necessary second step.

For example, should a physician recommend an implantable cardioverter-defibrillator (ICD) to his or her patient when a randomized, controlled trial shows that it works? The device is a small, battery-powered electrical-impulse generator implanted in patients at risk of sudden cardiac death due to ventricular fibrillation (uncoordinated contraction of heart-chamber muscle) or ventricular tachycardia (fast heart rhythm). A published randomized trial compared ICDs to management with drugs for heart-attack patients and found that ICDs reduced patients' probability of death at 20 months by about one-third.

Armed with such a guideline, the physician and patient must still make a value judgment: whether the estimated decrease in chance of death is worth the uncertainty, risk and cost of the procedure. The ultimate decision is not in the guideline, but it is better informed than a decision made without the evidence to help guide it. The guideline has lessened uncertainty but not removed it.
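
To make that value judgment concrete, here is a hypothetical worked example. Only the one-third relative risk reduction comes from the trial described above; the 20 percent baseline risk is an assumption invented for illustration:

    # Hypothetical worked example. The one-third relative risk reduction comes
    # from the trial cited in the text; the 20 percent baseline risk is an
    # assumed figure chosen only for illustration.
    baseline_risk = 0.20       # assumed risk of death at 20 months on drug therapy alone
    rrr = 1 / 3                # trial result: ICDs cut that risk by about one-third

    arr = baseline_risk * rrr  # absolute risk reduction: about 6.7 percentage points
    nnt = 1 / arr              # about 15 implants to avert one death at 20 months
    print(f"ARR = {arr:.1%}, NNT = {nnt:.0f}")
    # The guideline supplies these numbers; whether ~15 implants per death
    # averted justifies the procedure's risk and cost is the value judgment
    # it cannot make.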

The lesson here is that there are huge gaps in the scientific evidence guiding physician decision-making, and it wasn't until healthcare-quality gadflies like David Eddy began to demand to see the evidence that we learned about those gaps. This revelation has had at least two beneficial effects: it informs us about the lack of evidence so that we can be more realistic in our expectations and more aware of the uncertainty in medical decision-making, and it exhorts the medical community to search for better evidence.

"Nothing should be affirmatively promoted unless there is good evidence of at least some benefit," writes Eddy. It is simply amazing that applying such a statement to modern medicine represents such a ground-breaking development. But it has literally changed the face of medicine.

Excerpted from Demand Better! Revive Our Broken Healthcare System by Sanjaya Kumar and David B. Nash. Copyright © 2011 by Sanjaya Kumar and David B. Nash. Excerpted with permission by Second River Healthcare Press.
Author: yolanda67    Time: 2011-3-30 17:50

too much to read, but still thanks
Author: StephenW    Time: 2011-3-30 17:56

yolanda67 posted at 2011-3-30 17:50
too much to read, but still thanks

It basically says your doctor is often wrong

Author: pilot2006    Time: 2011-3-31 23:54

StephenW posted at 2011-3-30 04:56
It basically says your doctor is often wrong

Thank you for the summary!

Author: StephenW    Time: 2011-4-1 00:14

pilot2006 posted at 2011-3-31 23:54
Thank you for the summary!

From your icon, I see you like your pill

Author: pilot2006    Time: 2011-4-1 00:15

StephenW posted at 2011-3-31 11:14
From your icon, I see you like your pill

No, I don't like any pills; I have to take them. Just finished one month of treatment.

Author: StephenW    Time: 2011-4-1 00:23

pilot2006 posted at 2011-4-1 00:15
No, I don't like any pills; I have to take them. Just finished one month of treatment.

Good luck. I hope to see that "pill" changed into a




