Local Anesthetics: The Caine Family

As seen in the previous post, local anesthetics work by preventing only a small area of the body from experiencing pain: they inhibit the flow of sodium ions through sodium channels embedded in the cell membrane of neurons, which prevents action potentials and thus nerve activity. More specifically, the local anesthetic binds to a receptor site inside the sodium channel and antagonizes it, closing the channel and halting the influx of ions, as seen in the diagram below.


Many local anesthetics commonly bind to the N-methyl-D-aspartate (NMDA) receptor (an image of how an anesthetic might bind to a receptor through polar attractions between the receptor and the anesthetic is shown here), including the members of the Caine family: a category of local anesthetic compounds that share similar qualities (i.e., similar receptors and mechanisms of action) and end in the suffix "caine". What follows are descriptions of three different local anesthetics from the Caine family, chosen to demonstrate the functional and molecular diversity of local anesthetic compounds.

Cocaine:

Cocaine, otherwise known as benzoylmethylecgonine, can be used as a local anesthetic, but for the past several decades it has reached the headlines for very different reasons. Cocaine was used historically as an eye and nose anesthetic to block nerve signals during surgery, but side effects of cocaine exposure during surgery include intense vasoconstriction and cardiovascular toxicity. It is a powerful nervous system stimulant, and above all, it is extremely addictive. Repeated use of the drug can cause strokes, cardiovascular disease, and numerous other afflictions such as gingivitis, lupus, and an increased risk of heart attack. Cocaine can be administered in many different ways, most commonly through insufflation, injection, and, in the case of crack cocaine, inhalation. Cocaine is a controlled substance around the world because of its addictive properties and the terrible side effects of constant use.

How To Use It:

Most users of pure cocaine are drug addicts, but cocaine hydrochloride is still used as a topical anesthetic. It is applied in the mouth or nose with a cotton swab to numb the area. It should not be used in the eye or injected, and in rare cases patients may develop addictive behavior. Use the medication as specified by a healthcare professional, and do not use it more frequently or for longer than specified.

Molecular Structure:

Cocaine usually consists of pure C17H21NO4 derived from the leaves of the coca plant.

2D structure                                              3D structure

Properties:

  • The molecular weight of cocaine is 303.35 g/mol (a quick arithmetic check of this figure follows this list).
  • The molecular formula is C17H21NO4
  • The systematic name is Methyl (1R,2R,3S,5S)-3-(benzoyloxy)-8-methyl-8-azabicyclo[3.2.1]octane-2-carboxylate
  • Approximately 35.9 million Americans aged 12 and older have tried cocaine at least once in their lifetime, according to a national survey, and about 2.1 million Americans are regular users
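
As a quick arithmetic check on the molecular weight and formula listed above, the 303.35 g/mol figure can be reproduced by summing standard atomic masses over C17H21NO4. The short Python sketch below is only an illustration; the rounded atomic-mass values and the function name are our own choices rather than anything from a reference source.

    # Rough illustration: recomputing the molar mass of cocaine (C17H21NO4)
    # from rounded standard atomic masses (g/mol).
    ATOMIC_MASS = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999}

    def molar_mass(formula_counts):
        """Sum each element's atomic mass times its count in the formula."""
        return sum(ATOMIC_MASS[element] * count for element, count in formula_counts.items())

    cocaine = {"C": 17, "H": 21, "N": 1, "O": 4}
    print(round(molar_mass(cocaine), 2))  # prints 303.36, matching the ~303.35 g/mol above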

Novocain (Procaine):

First synthesized in 1905, novocain (the trade name of procaine) is an ester-type local anesthetic that induces a loss of sensation when injected; oral intake, by contrast, has been reported to have therapeutic rather than anesthetic value. The first synthetic local anesthetic to be produced, novocain was primarily used for oral surgery in dentistry; however, because ester-type anesthetics generally have a high potential for causing allergic reactions, it eventually became obsolete and was replaced by a more effective anesthetic known as lidocaine. Ester-type anesthetics are more prone to causing allergic reactions than amide-type anesthetics because, when they are metabolized in the body, they form a compound known as para-aminobenzoic acid (PABA). PABA has a documented history of causing allergic reactions that range from urticaria to anaphylaxis. Generally, the adverse side effects of using novocain include heartburn, migraines, and nausea, and it can induce a serious condition known as systemic lupus erythematosus (SLE); it is therefore highly advised that it be administered by a healthcare professional. However, unlike many other local anesthetics, novocain also has the property of constricting blood vessels, which reduces bleeding.

How To Use It:

The common and primary method of administering novocain for its anesthetic properties is by injection of a solution. However, if novocain is available in capsule or tablet form, it can also be taken orally, though its properties and effects will be greatly diminished and may be therapeutic rather than anesthetic. An informative video of how novocain is administered in oral surgery in dentistry can be found below.

Molecular Structure:

Novocain contains pure C13H20N2O2.

2D structure                            3D structure

Properties:

  • The molecular weight of novocain is 236.31 g/mol.
  • The molecular formula is C13H20N2O2.
  • The systematic name is 2-(diethylamino)ethyl 4-aminobenzoate
  • The melting point of novocain is approximately 61 °C, while its pKa value at 15 °C is 8.05 (see the estimate after this list for what this pKa implies at body pH).
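
The pKa listed above matters because, for a local anesthetic, it is mainly the un-ionized (free base) form that crosses the nerve membrane. As a rough, hedged estimate, assuming the pKa of 8.05 from the list and an assumed physiological pH of 7.4, the Henderson-Hasselbalch relationship gives the fraction of the drug that is ionized:

    # Illustrative estimate only: fraction of a weak base (novocain) that is
    # ionized at a given pH, via the Henderson-Hasselbalch relationship.
    # pKa comes from the properties list above; pH 7.4 is an assumed value.
    pKa = 8.05
    pH = 7.4

    ratio = 10 ** (pKa - pH)               # [ionized BH+] / [un-ionized B]
    fraction_ionized = ratio / (1 + ratio)
    print(f"{fraction_ionized:.0%} ionized at pH {pH}")  # roughly 82%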

 

Tetracaine:

Tetracaine is a local anesthetic used as a numbing medication. It is generally used for surface and spinal anesthesia, and it works by blocking nerve signals in the body. The most commonly used forms of tetracaine are creams and ointments. Its primary use is to reduce pain or discomfort caused by minor skin irritations, cold sores or fever blisters, sunburn or other minor burns, insect bites or stings, and many other sources of minor pain on the surface of the body. The medication is also given to lessen the pain caused by the insertion of a medical instrument such as a scope or a tube. Although in most situations tetracaine is used on the skin, it can also be used on the eye. The eye medication comes in the form of drops and is used to decrease feeling in the eyes right before surgery or a test or procedure involving the eyes.

How To Use It:

The eye-drop medication should be administered at the clinic, and after the procedure the patient must refrain from touching his or her eye until the medication is no longer in effect; in some cases, an eye patch is required. Tetracaine topical gel is applied in very small amounts, only enough to cover the area, and should not be used more than four times a day unless the doctor specifies otherwise.

Molecular Structure:

Tetracaine contains more than 98 percent of C15H24N2O2, calculated on the dried basis.

2D structure                                                      3D structure

Properties:

  • The molecular weight of tetracaine is 264.36 g/mol.
  • The molecular formula is C15H24N2O2.
  • The systematic name is 2-(dimethylamino)ethyl 4-(butylamino)benzoate.
  • The boiling point of tetracaine is between 362.4 and 416.4 degrees Celsius at standard pressure (1 atm).

The Antidepressant

Kinetic and mechanistic evaluation of antidepressant medication

A Brief Overview 

Neurons in the human brain transfer information through an electrochemical process that culminates in the brain interpreting the transmitted data. Between neurons there exists a synapse through which messenger neurochemicals cross. The presynaptic neuron, the initial neuron taking part in the communication, produces chemical courier neurotransmitters. After being transported to the neuron's external surface, these neurotransmitters are released into the synapse and find a receptor site on the secondary, or postsynaptic, neuron. By doing so, the chemical messengers have relayed their message, which catalyzes processes in the secondary neuron, including the construction of new neurotransmitters. When a surplus of neurotransmitters is put into the synapse, the initial neuron has the ability to reclaim the excess. Portions that go through reuptake are broken down in the neuron and used as raw material for future undertakings. At the origin of antidepressants were the monoamine oxidase inhibitors, or MAOIs, which stemmed from the tuberculosis drug iproniazid. This medication became a treatment for depression because of its ability to obstruct the elimination of recycled neurotransmitters: the heightened sense of positive mood and energy in those who were medicated came from blockage of the enzyme that breaks down norepinephrine, serotonin, and dopamine.

Tricyclic antidepressants

In an analogous manner, tricyclic antidepressants hinder the reprocessing of norepinephrine and serotonin, both increasing the likelihood that the message reaches the second neuron and allowing excess neurotransmitter to remain in the synapse. Tricyclic antidepressants (TCAs) are a set of antidepressant medications with homologous chemical structures and efficacy. Because depression is thought to be rooted in an imbalance of neurotransmitter levels, tricyclic antidepressants raise levels of norepinephrine and serotonin while impeding the function of acetylcholine. Anafranil, Elavil, Norpramin, Pamelor, Sinequan, and Tofranil are all commercial names of tricyclic antidepressants (imipramine among them) that are currently on the market, representing a now aged class of treatments for combating depression. Muscarinic, histaminergic, and α1-adrenergic receptors are antagonized in the action of classical TCA drugs, leading to anticholinergic (rendering inactive the neurotransmitter acetylcholine), sedative, and cardiovascular effects. In vitro, fluoxetine binds to these receptors in brain tissue much less potently than TCA drugs do. As their name indicates, TCAs have a three-ring chemical structure.

For example, in imipramine (Tofranil), the portions of the molecule crucial for antidepressant activity are the ring system, the length of the side chain, and the location of the substituent groups. The most active compounds are the secondary methylamines, along with a small number of primary amines (a functional group in which a nitrogen atom carries a lone pair). The tertiary amines, for their part, account for imipramine's sedative action while not taking part in its primary antidepressant purpose.

Mechanism of Action in Tricyclic Antidepressants

Selective Serotonin Reuptake Inhibitors

As opposed to TCAs, there exists a class of compounds termed selective serotonin reuptake inhibitors (SSRIs), now the most prescribed antidepressant medications in numerous countries. In the creation of the SSRIs, the method of rational drug design was used for the first time among the psychotropic drugs (psychoactive drugs cross the blood-brain barrier, affecting the central nervous system and altering brain activity): a definite biological target was identified and made the objective of a treatment. A prominent selective serotonin reuptake inhibitor, which works by delaying the reuptake of serotonin into human platelets so that released serotonin remains active for a longer period of time, is Prozac. The chemical formula of Prozac is C17H18F3NO (systematic name: N-methyl-3-phenyl-3-[4-(trifluoromethyl)phenoxy]-1-propanamine). Prozac is the trade name for fluoxetine. The fluoxetine molecule contains a variety of functional groups: two phenyl groups (benzene rings), an ether, and an amine. Prozac is also a chiral molecule, meaning it cannot be superimposed on its mirror image, a property often caused by an asymmetric carbon atom in the structure. This feature is worth noting because of its importance in inorganic, organic, physical, and biological chemistry. Fluoxetine is metabolized in the liver by CYP2D6, a process characterized by a slow rate and a long half-life within the body; this slow accumulation leads to a delay before meaningful effects appear. It is also an agonist at 5-HT2C receptors, linking back to the first blog post on beta-agonists.

Agonists, as mentioned in the previous post, are chemical compounds that bind to a receptor site and initiate a form of action in the receptor. In contrast to an antagonist, which thwarts an action, an agonist's strength is strongly linked to its half maximal effective concentration, otherwise known as the EC50. This is the concentration of the substance that causes an effect halfway between the minimal and maximal response after a definite period of time. In dose-response studies, this represents the 50% efficacy point and is analogous to the IC50, which quantifies a substance's inhibitory potency. The EC50 sits at the inflection point where the response to increasing ligand concentration begins to level off. The term ligand also appears in coordination chemistry, where it refers to the molecule or ion that connects to a central metal atom to create a coordination complex by donating electron pairs to the metal. The bonds created in the process can range from covalent to ionic in character, and the bond order is conventionally from one to three. In most circumstances, these ligands are also Lewis bases, which, in Gilbert N. Lewis's definition, are electron-pair donors that can react with a Lewis acid to form a Lewis adduct. Furthermore, the ligand is what determines the reactivity and redox behavior of the central metal atom. In conditions where the oxidation state is unclear, the ligand is called non-innocent, as in heme proteins and in redox chemistry not centered on the ligand. (An innocent ligand does not change in oxidation state, for example in the reduction of MnO4- to MnO42-. Here the transformation is a change in the oxidation state of manganese from +7 to +6; the oxide ligands remain at an oxidation state of -2, though a meticulous analysis would show that the ligand is changed in other ways by the redox reaction.)
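
Returning to pharmacology, the EC50 is easiest to see in a concrete dose-response curve. The short Python sketch below uses a simple Hill-type equation; the Emax, Hill coefficient, EC50, and concentrations are arbitrary placeholder numbers for illustration, not data for Prozac or any other drug.

    # Minimal sketch of a Hill-type dose-response curve. All numbers are
    # arbitrary placeholders chosen only to show what the EC50 means.
    def response(conc, ec50, emax=100.0, hill=1.0):
        """Percent of maximal effect at a given agonist concentration."""
        return emax * conc**hill / (ec50**hill + conc**hill)

    ec50 = 10.0  # hypothetical EC50, in arbitrary concentration units
    for conc in (1, 5, 10, 50, 100):
        print(f"concentration {conc:>3} -> {response(conc, ec50):5.1f}% of maximal effect")
    # At conc == ec50 the predicted response is exactly 50% of the maximum.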


3D animated view of Prozac Molecule

The Mechanism of Prozac

 

Quinolone Antibiotics: Medicine at its Best

by Rachel Boutom, Sneha Kabaria, and Andrea  Thomas

Antibiotics have been essential to mankind since the discovery of penicillin and have since branched out into many classes and thousands of medications. This article will explore the class of antibiotics referred to as "quinolone antibiotics" because of their quinolone nucleus. The quinolone nucleus is a double-ring structure composed of benzene and pyridine rings fused at two adjacent carbon atoms; the benzene ring contains six carbon atoms, while the pyridine ring contains five carbon atoms and a nitrogen atom. There are many variations, four generations, and a range of functions, benefits, and side effects of quinolone antibiotics, and you will learn all about them here.

What is Quinolone?

Quinolone is an antibiotic that works by interfering with DNA replication and bacterial transcription.

http://www.youtube.com/watch?v=3IFSxbEvY7g.

Quinolones carry out this function by inhibiting bacterial DNA gyrase, which is responsible for the negative supercoiling of DNA, and bacterial topoisomerase IV, an enzyme needed to separate the DNA strands after replication during cell division. The first drug in the quinolone family, nalidixic acid, was discovered in 1962. Since then, four generations of quinolones have been developed, grouped by their antibacterial spectrum. There is no single standard classification system for these drugs; however, there are general properties that differ between generations.

The earlier-generation agents generally have narrower spectra than the later ones. In addition, all non-fluorinated drugs in the quinolone class are labeled first-generation antibiotics. The majority of quinolone antibiotics used today are fluorinated, meaning they have a fluorine atom bonded to the six-carbon ring; these are called fluoroquinolones. Fluoroquinolones are broad-spectrum antibiotics that are effective against both gram-negative and gram-positive bacteria, and they play an important role in the treatment of serious bacterial infections, especially hospital-acquired infections and others in which resistance to older antibacterial classes is suspected.

 

How is it Synthesized?

All quinolones are synthetic, meaning they do not occur in nature and must therefore be synthesized in laboratories. Since the creation of the first quinolone, nalidixic acid, over 10,000 analogues and derivative compounds have been developed, and more than 800 million patients have been treated with quinolones. There are many ways of synthesizing these compounds: the Gould-Jacobs method, using esters, hydrolysis, and regiospecific substitution; the modified Gould-Jacobs method, using isatoic anhydride and sodio-ethyl formyl acetate; and many more.

Pharmacokinetics

The newer fluoroquinolone antibiotics also have improved pharmacokinetic parameters compared with the original quinolones. They are rapidly and almost completely absorbed from the gastrointestinal tract, and peak serum concentrations obtained after oral administration are very near those achieved with intravenous administration; consequently, the oral route is generally preferred in most situations. Absorption of orally administered fluoroquinolones is significantly decreased when these agents are coadministered with aluminum, magnesium, calcium, iron, or zinc, because insoluble drug-cation complexes form.

Because the fluoroquinolones have a large volume of distribution, they concentrate in tissues at levels that often exceed serum drug concentrations. Penetration is particularly high in renal, lung, prostate, bronchial, nasal, gall bladder, bile and genital tract tissues. Urine drug concentrations of some fluoroquinolones, such as ciprofloxacin and ofloxacin (Floxin), may be as much as 25 times higher than serum drug concentrations. Consequently, these agents are especially useful in treating urinary tract infections.

Distribution of the fluoroquinolones into respiratory tract tissues and fluids is of particular interest because of the activity of these agents against common respiratory pathogens. Trovafloxacin penetrates noninflamed meninges and may have a future role in the treatment of bacterial meningitis. The long half-lives of the newer fluoroquinolones allow once- or twice-daily dosing.

Bacterial Resistance

As with all antibiotic medicines, the potential for the development of antibacterial resistant strains of bacteria is always a threat. This has already been found to be a problem with quinolones. Gram-positive and gram-negative bacteria have been reported to be resistant to quinolones, and there are different mutations that cause this. The resistance appears to be the result of one of three mechanisms: alterations in the quinolone enzymatic targets (DNA gyrase), decreased outer membrane permeability or the development of efflux mechanisms. In addition, cross-resistance between quinolones is to be expected in the future.

One of the largest problems with antibacterial resistance is the degree to which the same medication is used, which determines how widely a resistant strain spreads and re-emerges in other places. For a long period of time, the increased potency and effectiveness of the newer generations of fluoroquinolones, compared with the older quinolones, led to an unregulated increase in their use. As they kept working effectively, their use proportionally increased, with a 40% rise in use in the United States during the 1990s. During this period, the rate of resistance to the two main fluoroquinolones doubled, particularly in areas such as hospital intensive care units.

Antibiotics 101

Antibiotics, an introduction.  

Antibiotics are agents, either naturally occurring or synthetically produced, that kill microorganisms or inhibit their growth. This general definition allows for the further classifications of antibiotics, or antimicrobials, into classes that describe the specific microorganism that each compound targets.  Some examples of these classifications include antibacterials, antifungals, antivirals, and antiparasitics. These antibiotics have had a major impact in the medical community since their widespread usage began in the 1940s. One of the milestones in antibiotics that marked the beginning of its widespread usage was achieved by Ernst Chain and Howard Florey when they were able to develop a powdered form of penicillin by isolating its active ingredient. Penicillin remained at the forefront of antibiotics for decades and was known as the “miracle drug” because of its ability to cure people of previously fatal bacterial infections. So how exactly does penicillin work?

          Put simply, penicillin works by destroying the cell wall of bacteria. It does this by specifically targeting and inactivating the enzyme transpeptidase. Transpeptidase is responsible for the cross-linking of the bacterial cell wall, and after a nucleophilic oxygen of the enzyme binds with penicillin, rendering the enzyme inactive, the cell wall of the bacterium ruptures. For more on how this process occurs, check out this link.

          Antibiotics can be manufactured synthetically or semi-synthetically, which means they can be made partly or entirely from non-living components. While semi-synthetic antibiotics are made by adding a step that chemically modifies the naturally produced compound, synthetic antibiotics are created with an entirely new chemical manufacturing process. In order to understand how semi-synthetic and synthetic antibiotics are made, one must understand how natural antibiotics are made. A sterile and controlled environment is required to produce all kinds of antibiotics in order to prevent external contamination of the product. Natural antibiotic production begins with the preparation of a culture of microorganisms. These cultures are constantly fed in fermentation tanks so that the microorganisms reproduce. After several days of supervising the process and controlling temperature, humidity, and other conditions, the antibiotic broth is run through a filtration system so that it can be purified and the drug separated. After confirming that the product is not contaminated, it is ready to be sold worldwide. Semi-synthetic drugs, however, require an additional step: instead of simply purifying the natural product, it is put through a chemical process that alters the structure of the drug, making it not fully synthetic but not fully natural. These structural alterations are made so that the drug acts more effectively on infecting organisms or is better absorbed by the body. Fully synthetic antibiotics are simpler to produce because synthetic drugs are not subject to the natural variations found in the living organisms used in fermentation tanks.

           There are many different types of semi-synthetic and synthetic antibiotics. For example, the first synthetically manufactured antibiotic was chloromycetin (chloramphenicol). This synthetically made antibiotic was used to treat ocular infections involving the conjunctiva or cornea; however, it is reserved for serious infections when weaker or less dangerous drugs are ineffective, and it is no longer available in the U.S. Ampicillin is a penicillin antibiotic derived from the basic penicillin nucleus, 6-aminopenicillanic acid; this drug is used to treat infections such as urinary tract infections. Erythromycin is another synthetically made antibiotic, part of the macrolide antibiotics, which slow the growth of bacteria by reducing the production of proteins the bacteria need to survive; this drug is mostly used by people who are allergic to penicillin. These examples of synthetically or semi-synthetically made antibiotics are important contributors to the treatment of infections and are produced worldwide by manufacturers in order to meet the demands of patients.

          Traditionally, bacteria are not resistant to antibiotics; in fact, antibiotics are used to kill bacteria and other microorganisms. Bacteria multiply by the billions and adapt and mutate often. Sometimes these mutations make bacteria resistant to antibiotics and therefore harder to treat, and resistant bacteria that are not killed by antibiotics continue to multiply. The misuse and overuse of antibiotics further drive this resistance, which leads to the creation of superbugs. Superbugs are formed when the gene that carries bacterial resistance is transferred between bacteria, creating bacteria with resistance genes against many antibiotics. The most common types of superbugs are methicillin-resistant Staphylococcus aureus (MRSA) and multidrug-resistant or extensively drug-resistant tuberculosis (MDR-TB and XDR-TB).

          Bleach is a commonly known disinfectant. Its active ingredient, hypochlorous acid, disinfects by causing proteins in bacteria to unfold and clump together into a mass in living cells. This process is similar to boiling an egg: boiling denatures the proteins of any bacteria in the egg, making it safe to eat, just as bleach does to bacterial proteins. As the bacterial proteins unfold, a heat shock protein called Hsp33 comes into effect, protecting proteins from this aggregation and thereby increasing bacterial bleach resistance. Bleach's basicity also tends to compromise a bacterium's lipid membrane, a process similar to the popping of a balloon. Overall, bleach is an extremely versatile disinfectant that kills a broad range of bacteria.

Check out this quick video on antibiotics in the meat industry.

 

Deception between Orthopedic Implants and the Human Body

In our last blog post, we discussed how nanocages are formed through manipulations of intermolecular forces. Another compound whose behavior depends on intermolecular forces is hydroxyapatite, well known for its applications in improving prosthetic implants. Being chemically similar to the mineral component of bones and hard tissues, it is one of the few materials classified as bioactive; this means it supports living tissue, bone ingrowth, and a direct structural and functional connection between living bone and the surface of a load-bearing artificial implant. Before we descend further into its contributions to the world of medicine, we need a more in-depth understanding of the properties of hydroxyapatite.

So, what is hydroxyapatite?   

Hydroxyapatite (Ca10(PO4)6(OH)2) is a form of calcium phosphate with a broad range of applications. It can separate/purify proteins, assist bone implants, and can also be applied in drug delivery systems (like the nanocages in our previous post).

Figure 1: Unit cell of Hydroxyapatite

Part of what makes hydroxyapatite (HA) so useful in implants is that it is readily accepted by the body. The body tends to reject foreign bodies implanted into it. HA, however, is actually a major component of bone. Because the body is already used to seeing HA, it doesn’t react violently at all with it.

However, it is not so easy to create HA coatings for marketable use. The only commercially accepted method is plasma spraying. The process involves a number of variables, and small changes to them can drastically affect the final outcome. This is especially troublesome given that, at the temperatures at which plasma spraying operates (over 800 °C), HA begins to decompose. Until other, more manageable methods for the synthesis of HA coatings become commercially acceptable, widespread use of the coating in the medical field remains a lofty goal.

How does it work?

First of all, layers of HA more than 20 µm thick have to be applied to the implant. A simple coating, however, carries the risk of bacterial infection, easy exfoliation of the coating layer, and non-homogeneous coating thickness and chemical composition. This can be avoided through the creation of a hydrophilic surface. Promimic, a biomaterials company, developed a way to transform a surface into a super-hydrophilic one. Using a special coating procedure, they were able to produce a uniform layer of only 20 nm. Because this layer is so thin, it does not risk exfoliation. Furthermore, the resulting surface is osteoconductive, and it is therefore used on titanium implants.

So how does HA actually work? In the late 1980s, new synthesis procedures yielded HA microrods with hexagonal cross-sections, which could then be sintered at high temperature with spheres of similar diameter to bond their structures. The functional groups of HA consist of positively charged calcium ions and negatively charged oxygen atoms. When brought near an alkaline or acidic protein, the HA interacts with the protein through ionic and hydrogen bonding. The exchange between cations and crystal phosphates, and between anions and crystal calcium ions, promotes this bonding and helps the bioactive material serve as the connection between living bone and the artificial implant.

Ionic and hydrogen bonding are important concepts in the use of HA because it is imperative that the attractive intermolecular forces between the HA and the protein clusters overcome the electrostatic repulsion between like charges (cation-cation and anion-anion). Calculations must be done to ensure the interaction succeeds in keeping the structures together.

Applications of Hydroxyapatite

Although nanosized HA is not yet commercially available as a competitive material with respect to other forms of HA, it shows promise for many future applications. These include faster implant surface turnover, bone replacement, scaffolding for tissue engineering applications, drug delivery systems such as intestinal delivery of insulin, and use in genetic therapy for certain types of tumors. Some of these examples will be discussed in more depth in our next blog post.

Pharmacokinetics: The Kinetics behind Anesthetics

Kinetics plays a large role in understanding how the body eliminates anesthetics and therefore is imperative in determining the rate of administration during surgery that will keep the patient sedated.  This also applies to all drugs to ensure that the drug concentration remains within the therapeutically defined parameters.  This specific study of kinetics is called pharmacokinetics.

One key principle behind pharmacokinetics is drug monitoring: determining the quantity of drug present in comparison to the quantity required for the desired effect.  Monitoring these drugs in the body reveals how the body metabolizes them.  The body absorbs, distributes, biotransforms, and excretes these drugs, and following the concentration of the compound over time allows conclusions to be reached.  These enzymatic reactions in the body show that most drugs are broken down according to first-order kinetics.  In a first-order reaction, the rate of metabolism depends on the concentration of the drug: a higher concentration is metabolized at a higher rate, while a lower concentration is metabolized at a slower rate.  This property is beneficial in that it helps the body prevent overdose through accumulation of the drug; however, it also makes it difficult to maintain drug concentrations, especially when relatively high concentrations are required, since the body counteracts them.

The other possible order in pharmacokinetics is a zero-order reaction.  Zero-order reactions occur at a constant rate, independent of the concentration of the drug being metabolized.  Although most drugs follow first-order kinetics, there are several notable exceptions.  One exception is phenytoin, also known as sodium 5,5-diphenyl-2,4-imidazolidinedione, an injectable drug used in the treatment of seizures.  Because its metabolism is a zero-order reaction, this drug poses a serious threat of toxicity in the blood and therefore carries a warning that it should be administered at less than 50 mg/min.
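
As a rough sketch of the difference between the two kinetic orders (the starting concentration and rate constants below are made-up values for illustration, not clinical numbers), first-order elimination removes a constant fraction of the drug per unit time, while zero-order elimination removes a constant amount:

    # Illustrative comparison of first-order and zero-order drug elimination.
    # All parameter values are assumed, chosen only to show the two curve shapes.
    import math

    def first_order(c0, k, t):
        """Concentration after t hours when the elimination rate is proportional to C."""
        return c0 * math.exp(-k * t)

    def zero_order(c0, k0, t):
        """Concentration after t hours when a fixed amount is removed per hour."""
        return max(c0 - k0 * t, 0.0)

    c0, k, k0 = 100.0, 0.2, 10.0  # assumed: starting units, 1/h, units/h
    for t in range(0, 13, 4):
        print(f"t = {t:2d} h   first-order: {first_order(c0, k, t):6.1f}"
              f"   zero-order: {zero_order(c0, k0, t):6.1f}")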

Still, these kinetic models do not perfectly explain how the human body handles these compounds.  One major building block of pharmacokinetics is the first-pass effect.  When a compound enters the body by certain routes, it passes through the liver, and on this initial pass the liver substantially metabolizes the drug before it even reaches circulation; this is the foundation of the first-pass effect.  Subsequent passes through the liver may not be as efficacious as the initial one.

Overall, this understanding of kinetics plays a crucial role in the safe use of anesthesia for medical purposes.  This understanding can then be applied to almost all drugs and compounds that enter the body.

The Placebo Effect: Is it Really Mind Over Matter?

What is the placebo effect?

In both medical practice and research, a placebo is an inert substance or medical dose that is identical in odor, appearance, and taste to an active drug.  Clinically, a placebo is defined as a substance with no known medical effect that is administered as a control in an experiment to determine the effectiveness of a medical drug. Many times, a placebo is simply a sugar pill. Professionals across several fields are aware that placebos of this kind can cause what is known as the placebo effect: clinical patients report the therapeutic effects of the active drug even though a placebo was used. A basic explanation of the idea of a placebo can be found in this video: The Strange Powers of the Placebo Effect. While this effect was originally thought to be only a trick of the easily fooled human mind, this article from WebMD describes how placebos actually cause physical changes. Many studies can now show how the expectation of pain relief or other desirable outcomes can cause physical changes in how the brain responds to perceived pain and other ailments. PopSci offers this article, which explains how functional MRI (fMRI) scans are used to locate the placebo effect's neural activity in the spinal cord. In the fMRI scans, light patterns depict the specific cells and areas that are responsible for a placebo's ability to decrease pain. The relief is caused by chemical changes and electrical impulses in the prefrontal cortex, an area of the brain that can essentially desensitize or reduce the activity of pain-sensing areas of the brain. While most of this information is well known, much debate arises when discussing how, at a very minute level, this effect works.

How does it work?

For the longest time, the placebo effect was labeled a psychological phenomenon with no neurological basis. This has since changed. In fact, with various techniques for detecting brain activity, researchers have been able to pinpoint the neurological circuit that is activated during the placebo effect: that of the reward response and motivated behavior. Specifically, as detailed in Archives of General Psychiatry, the anterior cingulate (top left), the orbitofrontal and insular cortices (top right), the nucleus accumbens, the amygdala (bottom left), and the periaqueductal gray matter (bottom right) are the areas that are stimulated.

This stimulation is due to endogenous opioid neurotransmission and dopaminergic activation. During this time, the µ-opioid receptor binding potential decreases by 10%-26%, reducing the pain experienced. Let's take a closer look at the opioid neurotransmission mechanism.


An agonist binds to a guanosine nucleotide-binding protein-coupled receptor (A), which activates the G protein by switching GDP for GTP (B). G subunits then stimulate effectors, with effects that include the activation of K+ channels that make neurons less active (C-E). Phosphorylation occurs at the C-terminal of the receptor, which prompts regulating signal transduction proteins to bind to these ends (F). A phosphorylation of dynamin that occurred earlier (D) results in an endocytotic vesicle closing off (H). The receptor then dephosphorylates (I) and is reinserted into the neuron's membrane (J). This entire process is one of many that occur during the placebo effect, accounting for an effective treatment without medication. Check out the review article in Brain for more information.

As for the dopaminergic activation mechanism, it is similar to both the above cycle and the previous blog on midazolam. Dopamine binds to dopamine receptors on a neuron, which activates many other reactions. Some of these include increased ATP production, regulation of ion gates in the neurons, and increased cognitive capability, which amalgamate with other factors to create the placebo effect. However, if you want a more detailed explanation, you can read a book by Dr. Natarajan that is solely dedicated to the dopamine-mediated activation.

Why does it work from an evolutionary standpoint?

It is possible to trace various stages of the use of placebos throughout history. The earliest examples most often involve the idea of spiritual healing. At the time, it was not understood how exactly placebos worked; however, it is, and would have been, possible to understand why they work. Considering that the placebo response releases natural chemicals in the body, the process could have changed slightly from generation to generation, and like many other characteristics, the effective use of and response to placebos can be traced as an evolutionary trait. This video by The Royal Institution does an excellent job explaining why the effect works: essentially, for survival. It would seem to make more sense for the body to naturally alleviate pain without a placebo, yet this does not happen. To understand why the body works the way it does, it is easier to view the situation as a cost-benefit analysis in which the body relieves its own pain only when the benefits of doing so outweigh the costs. For example, if the body finds it more beneficial to express the sensation of pain (to prevent further injury) than to press on (relieving the pain so the body can keep functioning), then the placebo effect may not work as well. Following this logic, the placebo effect, or self-alleviation, should work better when whatever the individual faces poses a greater threat than the pain itself.

Examples In Practice

Research conducted at the University of Michigan, to be published in the Journal of Neuroscience, was based on a controlled experiment conducted on 14 men ages 20-30. The men were subjected to pain via a salt injection and then injected with a "pain-killing" placebo. Brain scans showed the release of natural pain-relieving endorphins after the placebo was administered. USA Today offers more information on the study.

Ted Kaptchuk of Harvard conducted a study in which patients with severe arm pain were given two different treatments: pain-relieving pills and acupuncture. The results of this research, in which both treatments administered were placebos, are astounding. Harvard Magazine covered the story in this article.

Is it ethical?

When the ethics of the placebo are discussed, it is usually in the context of research. However, the more interesting question is whether it is ethical to use the placebo effect in clinical cases, given the current understanding of placebo effects. Unlike in research, in clinical cases a doctor uses or "prescribes" a placebo in hopes of positive results. The question that arises, then, is: in what situations is it ethical to use, or not to use, a placebo in place of an active drug in clinical practice? Most arguments for or against the use of placebos in the clinic focus on whether a given placebo is effective; given the nature of the placebo, however, this is a difficult argument to make. While previous knowledge gaps prevented the use of placebos in practice, the placebo now proves imperative in many clinical situations and should not be denied a place in medical treatment. The Journal of Medical Ethics offers interesting insight into situations in which placebo treatment could be a legitimate therapeutic option, and also when it could essentially be considered a required treatment. In addition to providing example cases, the journal offers practical guidelines to be followed when a placebo is used in clinical practice. These guidelines include:

  1. The intentions of the physician must be benevolent: her only concern the well being of the patient. No economical, professional, or emotional interest should interfere with her decision.

  2. The placebo, when offered, must be given in the spirit of assuaging the patient’s suffering, and not merely mollifying him, silencing him, or otherwise failing to address his distress.

  3. When proven ineffective the placebo should be immediately withdrawn. In these circumstances, not only is the placebo useless, but it also undermines the subsequent effectiveness of medication by undoing the patient’s conditioned response and expectation of being helped.

  4. The placebo cannot be given in place of another medication that the physician reasonably expects to be more effective. Administration of placebo should be considered when a patient is refractory to standard treatment, suffers from its side effects, or is in a situation where standard treatment does not exist.

  5. The physician should not hesitate to respond honestly when asked about the nature and anticipated effects of the placebo treatment he is offering.

  6. If the patient is helped by the placebo, discontinuing the placebo, in absence of a more effective treatment, would be unethical.

Other ethical questions that come into play include charging patients for a treatment with no known medical effect, and the possibility of a placebo not working on a subject who is injured, ill, or in pain. Additional arguments on the ethics of placebos can be found in the discussion on theconversation.com.

Bringing Antibiotics to the Counter: What Does It Take To Produce Antibiotics?

So far, we’ve talked a lot about antibiotics. But, how did they end up at your neighborhood pharmacy? Well, read further and you’ll learn about what it takes to produce the medication that helps you feel better when you’re not well.

The antibiotic production industry is certainly a lucrative one, which may explain the 10,000+ antibiotics on the market today. Despite their variety, producing them is not simple. Part of antibiotic production involves a process called fermentation; however, the processes used differ depending on the type of antibiotic desired. It makes sense that a topical antibiotic ointment and a swallowable antibiotic tablet require different processes.

The first step in producing antibiotics is research and testing. This part of production is long-lasting and costly because it requires screening thousands of organisms. Sometimes the organism found to produce antibiotic compounds has already been discovered, and it's back to the drawing board. If the organism being tested produces an original antibiotic compound, a great deal of clinical testing and federal regulation and approval is involved.

Fermentation

Figure: The fermentation process used in antibiotic production.

But how do we extract this antibiotic compound from the organism on an industrial scale? This is where the fermentation process becomes essential to the production of antibiotics. In sterile conditions, the organism is grown and the antibiotic agent it produces is isolated. Raw materials are used to create what is referred to as the fermentation broth, which acts as a bath for the antibiotic-producing organisms to grow in. It is composed of a carbon source such as molasses or soy meal to act as nutrition for the organism; these materials are especially significant because they contain both lactose and glucose. Additionally, ammonia is added so that the organism's metabolism runs more efficiently. To regulate the organism's growth, water-soluble salts of zinc, iron, sulfur, copper, phosphorus, and magnesium are added. However, another issue arises as the broth is made: it begins to foam. To counteract the foam, compounds containing silicones or lard oil are used.

The figure above puts the entire process into a visual display of fermentation that provides a more coherent idea of the production of antibiotics.

 

Post-Fermentation


After the fermentation process, the broth is allowed to settle for three to five days, at which point it contains the maximum amount of antibiotic. From here, the antibiotic must be isolated and purified, whether through ion-exchange methods for water-soluble antibiotics or solvent-extraction methods for oil-soluble antibiotics. At the end of either purification method, the antibiotic is obtained as a purified powder that can be further refined into different products. For example, it can be made into solutions for IV bags or syringes, solid forms for capsules and pills, and even a ground powder for topical ointments.

 Preparation of Antibiotics from Purified Powder

To make the IV bag or syringe, the antibiotic is dissolved into a solution so that it can be administered directly through a vein in the body. Capsules can be created by filling the bottom half of a capsule with the powdered form and closing it off with the top half of the capsule for oral administration. Finally, for ointments and topical medicine, the antibiotic powder itself is mixed with the ointment, whether it be in cream, gel, or lotion. After this final manufacturing, the melting point and pH are tested throughout shipment to ensure quality control and purity.

Neosporin and pills are common forms of antibiotics made from purified powder to create topical ointment and capsules.

Kinetic Instability of Fermented Substances

Although the fermentation process is efficient, isolating the antibiotic itself from the chemical solution is very inefficient because many of these antibiotics are extremely unstable. In fact, the half-life of thienamycin, one of the most potent natural antibiotics known, at a pH of 6-8 is approximately 0.3 to 6 hours, depending on the conditions applied to the solution. Additionally, olivanic acids have chemical degradation half-lives ranging from 4 to 27 hours. This kinetic instability makes it quite difficult for the process to run efficiently, but research is being conducted on making it more efficient and useful for human consumption and for the antibiotic industry as a whole.
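
To put those half-lives in perspective, here is a small back-of-the-envelope sketch (our own number-plugging, assuming simple first-order decay) of how much of such a compound would survive a single day in solution:

    # Back-of-the-envelope estimate, assuming simple first-order decay, of the
    # fraction remaining after 24 hours for the half-life range quoted above.
    def fraction_remaining(hours, half_life_hours):
        """Fraction left after first-order decay over the given time."""
        return 0.5 ** (hours / half_life_hours)

    for half_life in (0.3, 6.0):  # hours, from the thienamycin range above
        left = fraction_remaining(24, half_life)
        print(f"half-life {half_life} h -> fraction left after 24 h: {left:.3g}")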

Future of Antibiotics

So now that we've taken a look at how antibiotics are produced today, let's see what can be done in the future. This really cool journal article explains how synthetic biology coupled with rational engineering can help pharmaceutical engineers go beyond the status quo.

Mutagenesis is a process in which an organism's genetic information is altered, through exposure to mutagens, to a point where the organism is still living but carries mutations. These mutations can actually make the organism produce antibiotics more efficiently, which is why there are plans to bring the approach back.

This figure shows how the process of industrial antibiotic manufacturing has evolved over time.

Still interested? Of course you are. For some more information on the processes of antibiotic manufacturing check out the following links:

The Synthesis of Antibiotics

Industrial Antibiotic Production (download)

The Chemistry of Digestion: Exposing the True Secrets of the Stomach and Intestines

Digestion can be simply described as the catabolic process of breaking down the foods we eat, extracting the nutrients the body needs, and expelling the waste material. This process is important because it provides all of the energy we use to function daily. Even when organisms consisted of little more than a mouth and an anus with an esophagus connecting the two, digestion was crucial in providing the organism with energy, eventually allowing organisms to develop more complicated features. About 95% of nutrients are absorbed in the small intestine. Digestion takes on average about 53 hours, with around 40 of those hours spent in the large intestine.

The different processes of digestion for types of biomolecules

The Chemistry of Digestion

There are many chemicals that play crucial roles in digestion, breaking down food and separating the nutrients our body needs to absorb from the waste material that will eventually be removed from the body. Digestion can be divided into mechanical and chemical digestion; the two occur together at certain times and alternately at others. The major components of mechanical digestion are the chewing of the teeth, peristalsis in the intestines, the churning of the stomach, and the separation of fat by bile in the small intestine. Initially, the mouth performs mechanical digestion, using the teeth to chew food into smaller pieces. The food is also exposed to saliva in the mouth, which contains amylase. This enzyme catalyzes the reaction that breaks down starch into sugars, forming smaller disaccharides such as maltose or even monosaccharides such as glucose. Whatever the case, the enzyme consists of three domains, with the A domain carrying out the hydrolysis of starch.

Hydrolysis

This is the use of a water molecule to break bonds. In the stomach, multiple enzymes are added to break down the food, along with around 400-800 mL of stomach acid per meal. This "gastric" juice consists of hydrochloric acid at a concentration of about 0.1 M, as well as some sodium chloride and potassium chloride. One of the most critical of these enzymes is pepsin, which is initially secreted as pepsinogen by the chief cells and converted to pepsin as the pH is lowered to around 2 by hydrochloric acid released by the parietal cells; this acidity also stops the activity of amylase. It is important that pepsin is initially secreted as a zymogen, an inactive form, because otherwise the enzyme would digest parts of your body. It is only activated inside the confines of the stomach, where the pH is low enough to turn pepsinogen into pepsin. Thus chemical reactions not only help digest the food but also control when and where digestion occurs.

Caption: Pepsinogen and Pepsin

Salivary amylase

Parietal cells also release the protein intrinsic factor, which later aids the absorption of Vitamin B12 in the small intestine. Other cells secrete mucus, which protects the cells lining the stomach from the activity of pepsin.

Bile Acids

Yet another class of chemicals that aid in digestion are the bile acids. Found naturally in bile, a digestive fluid produced by the liver that helps the small intestine digest lipids, the bile acids play two important roles in digesting food. Bile acids are amphipathic, meaning each molecule has both a water-loving and a fat-loving region. One known effect of bile in the digestive process is the emulsification of lipid aggregates: much as a washing detergent does, the bile acids break fat down into microscopic droplets. Another astounding property of bile acids is their ability to solubilize a great variety of lipids; essentially, the bile acids solubilize normally insoluble compounds by surrounding them in a structure similar to a micelle. Thus, bile acid is extremely important, as it aids digestion in more ways than one.

The Intestines

In the small intestine, finger-like projections in the walls called villi release intestinal enzymes which finish the digestion of proteins and carbohydrates. In the duodenum, the first part of the small intestine, sodium bicarbonate from the pancreas neutralizes the pepsin, chyme (the mixture of partially digested food), and HCl arriving from the stomach, while watery mucus released by the duodenum protects the intestine. Bile from the liver emulsifies, or breaks down, fat molecules so that fat-digesting enzymes (lipases) can act upon them, and it also helps neutralize the acids from the stomach. Other intestinal enzymes (proteases for proteins, amylases for carbohydrates, maltase, sucrase, and lactase for sugars, peptidases for peptides, and nucleases for nucleic acids) break down sugars and peptides, eventually finishing the digestion of carbohydrates and proteins. Hormonal secretions are released in the intestine to control the flow of food and ensure all of the nutrients can be absorbed. Secretin, released by the duodenum and triggered by food passing into the small intestine, controls the secretion of sodium bicarbonate and stops the addition of stomach contents into the intestine until the previous contents can be neutralized, protecting the intestine from the stomach's acidic conditions. Gastrin in the stomach is triggered by proteins entering the stomach and triggers the addition of gastric enzymes into the stomach. Cholecystokinin (CCK) is released by the small intestine and triggers the release of bile and pancreatic enzymes into the small intestine.

Finally, in the large intestine, any remaining nutrients such as vitamins, along with water, are absorbed, and the waste material is expelled from the body as a bowel movement.

The different parts of the digestive system


Conclusion

https://www.youtube.com/watch?v=v2V4zMx33Mc

From the moment food enters your mouth, to the moment it leaves your body as waste material, a large variety of chemical reactions are occurring. Mechanical and chemical digestion break down the food you consume, allowing for it to be used by your body. Food is broken down, increasing the total surface area, allowing for contact with the enzymes. Nutrients and minerals are absorbed through various processes, made possible by the cohort of enzymes catalyzing reactions. Without the well-coordinated series of chemical processes that must happen, food would merely pass through your body without acting as sustenance.

General Anesthesia: The Loss of a Body

For nearly a century, general anesthetics have been frequently used in medical procedures to completely eliminate the pain and suffering of surgical patients. The process involves administering general anesthetic drugs to induce conditions such as amnesia, muscle paralysis, sedation, and analgesia. As opposed to the other branches of anesthesia previously discussed (local and regional anesthesia), general anesthesia places the patient in a complete state of unconsciousness, rendering all areas of the body unable to feel painful stimuli. After undergoing general anesthesia, the patient is in a state characterized by the following: unable to respond to or feel pain, unable to remember recent events due to the induced amnesia, unable to breathe or move as a result of lingering muscle paralysis, and susceptible to cardiovascular changes caused by the side effects of taking general anesthetics.

How it is taken

The two most common methods of receiving general anesthetics in the medical community are as an inhalant via an endotracheal tube (which provides the anesthetic and oxygen) or intravenously through an IV line. In some cases, the two mechanisms are used simultaneously in an operation: the intravenous injection begins the procedure by inducing initial unconsciousness, while exposure to anesthetic inhalants prolongs and sustains the effects. After the surgery is completed, the gases and the IV line are discontinued. The patient is then taken to a PACU (post-anesthesia care unit) to recover from the lingering effects of general anesthesia for a period of time that depends on the magnitude of the operation and the individual's tolerance of anesthesia. Symptoms that appear after general anesthesia include vomiting, nausea, sore throat, and incisional pain. In the recent era, general anesthesia has had a relatively low rate of mortality (about 1 in 100,000) thanks to advances in technology and in the medical world.

Mechanism of Action

The mechanism of action of a general anesthetic compound once it enters the body is not fully understood. However, decades of use and research have led to several theories. One popular theory (aside from interaction with glutamate-activated NMDA ion channels) involves the interaction of a general anesthetic with GABA receptors in the brain. At a molecular level, anesthetics are able to induce their effects because they tamper with the functions and behavior of neurons. Neurons are the source of our daily consciousness, complex thoughts, and general mental capabilities, so by altering neuron functionality, specifically the ion channels within neurons, anesthetics are able to induce a temporary loss of feeling.

As seen in previous blog posts, anesthetics change the electrical activity (or electrical excitability) of neurons by controlling the flow of excitatory or inhibitory ions through ion channels in the neuronal cell membrane. General anesthetics produce their effects primarily through either the enhancement of inhibitory signals or the blockade of excitatory signals at GABA receptors (ion channels). These receptors are part of the Cys-loop superfamily of ligand-gated ion channels, characterized by a disulfide bond between two cysteine residues, and are composed of a combination of transmembrane polypeptide subunits.

Because these receptors are integral to the functioning of the central nervous system, they are closely involved in memory, awareness, and consciousness. It is therefore no surprise, and certainly a plausible theory, that the interaction of general anesthetics with GABA receptors, reducing excitatory and increasing inhibitory ion flow, induces unconsciousness and temporary amnesia. GABA receptors are very sensitive to the presence of general anesthetics: anesthetic molecules tend to bind at sites within the receptor and modulate its actions. Specifically, in the presence of general anesthetics, the ability of the GABA receptor to open its ion channel is increased, which increases the inhibitory activity of the receptor.

Advantages/Disadvantages

Though general anesthesia induces complete unconsciousness and lack of response to painful stimuli, it may not be the best course of action because it affects the entire body and each patient has their own unique medical condition. Choosing to undergo either local or regional anesthesia may ultimately be a better option, ensuring safety of the patient. The advantages and disadvantages of general anesthesia are the following:

Advantages:

  • It is a reversible process that can be administered very quickly
  • If a patient has some sensitivity/allergic response to local anesthetics, general anesthesia can be used.
  • Very low probability of patient being able to recall moments of the procedure and sustain consciousness during the process.
  • Easily adaptable

Disadvantages:

  • It is a complex process that demands intricate care by the medical professional and is relatively costly for the patient.
  • It carries a chance of causing malignant hyperthermia, a rare muscular condition in which exposure to some general anesthetics leads to a dangerous temperature rise, hyperkalemia, hypercarbia, and metabolic acidosis.

Desflurane

Desflurane (also known as Suprane) is a common general anesthetic that is a nonflammable liquid at temperatures below 22.8 °C but is administered as an inhalant using a vaporizer. A fluorinated methyl ethyl ether, desflurane has the chemical formula C3H2F6O and a relatively low solubility in blood, which makes it well suited for general anesthesia. An interesting yet unfortunate drawback of desflurane is its tendency to react with carbon dioxide absorbents to produce carbon monoxide, which can raise the level of carboxyhemoglobin in patients and limit the capacity of regular hemoglobin to bind and deliver oxygen to areas of the body.

The anesthetic drug is a relatively recent discovery and has been widely used and purchased in the commercial medical market; consequently, multiple routes for synthesizing desflurane have been discovered and patented. One process is the preparation of desflurane by treating isoflurane with hydrogen fluoride in the presence of antimony pentachloride: CF3CHClOCHF2 + HF → CF3CHFOCHF2 + HCl, with SbCl5 serving as the catalyst. In most cases, to conduct the reaction, the hydrogen fluoride is added to a mixture of isoflurane and antimony pentachloride. It is recommended that the HF be used in its liquid state and added to the mixture at a rate of 0.25 to 0.5 molar equivalents per hour. Since the reaction is endothermic, precautions must also be taken so that the temperature is maintained at about 9 to 18 °C. The entire process is conducted in a reaction vessel that is inert to the reagents; materials recommended for the vessel include polytetrafluoroethylene, carbon steel, copper, and nickel. These are just some of the steps taken to ensure that the amount of desflurane produced is optimized. Another process, which synthesizes desflurane in a relatively inexpensive and environmentally safe manner, reacts hexafluoropropene epoxide with methanol to form methyl 2-methoxytetrafluoropropionate, which is then hydrolyzed to an acid. The acid is decarboxylated to create an ether, which is then chlorinated to form CF3CFHOCHCl2. The last step is the reaction of this product with a fluorinating agent to ultimately produce desflurane. In the chlorination process, the reduction is carried out by illuminating the reaction mixture with UV light in the presence of a lower alkanol. One reason this route of synthesis is favorable is the choice of hexafluoropropene as the starting substrate: it is chemically stable, abundant, relatively inexpensive, and environmentally friendly.

Conclusion

Whether it is local, regional, or general, the discovery and implementation of anesthesia in the medical community has significantly changed how surgical operations are performed and the efficiency at which they are executed. Anesthetics are truly mind-boggling and remarkable products of chemistry, wielding the powerful ability to make the body unable to experience pain. The fact that a mere molecule is able to alter the way we think, move, and feel by interacting with receptors in our brains is extraordinary, a fact that seems supernatural. Over the course of our blog posts, we hope that you were able to learn something new about anesthesia and partake in our fascination with the subject. The era of anesthesia has just begun; a boundless future awaits us.