Murphy v. Twitter, Inc. (2021)


    Filed 1/22/21
    CERTIFIED FOR PARTIAL PUBLICATION*
    IN THE COURT OF APPEAL OF THE STATE OF CALIFORNIA
    FIRST APPELLATE DISTRICT
    DIVISION ONE
    MEGHAN MURPHY,
    Plaintiff and Appellant,
    v.
    TWITTER, INC., et al.,
    Defendants and Respondents.
    A158214
    (San Francisco City & County Super. Ct. No. CGC-19-573712)
    When Meghan Murphy posted several messages critical of transgender
    women on Twitter, the company took down her posts and informed her she
    had violated its hateful conduct rules. After she posted additional similar
    messages, Twitter permanently suspended her account. Murphy filed suit,
    alleging causes of action for breach of contract, promissory estoppel, and
    violation of the unfair competition law (Bus. & Prof. Code, § 17200 et seq.)
    based on allegations that Twitter’s actions violated its user agreement with
    Murphy and hundreds of similarly situated individuals. The trial court
    sustained Twitter’s demurrer to the complaint without leave to amend,
    concluding Murphy’s suit was barred by the Communications Decency Act of
    1996 (CDA) (47 U.S.C. § 230; hereafter section 230).
    * Pursuant to California Rules of Court, rules 8.1105(b) and 8.1110, this
    opinion is certified for publication with the exception of parts II.C. and D.
    Under section 230, interactive computer service providers have broad
    immunity from liability for traditional editorial functions undertaken by
    publishers—such as decisions whether to publish, withdraw, postpone or
    alter content created by third parties. Because each of Murphy’s causes of
    action seeks to hold Twitter liable for its editorial decisions to block content
    she and others created from appearing on its platform, we conclude Murphy’s
    suit is barred by the broad immunity conferred by the CDA. In addition,
    Murphy has failed to state a cognizable cause of action under California law,
    and has failed to demonstrate how she could amend her complaint to allege a
    viable claim for relief. Accordingly, we affirm the judgment of the superior
    court.
    I. BACKGROUND
    In February 2019, Murphy filed a complaint against Twitter, Inc. and
    Twitter International Company (Twitter), asserting causes of action for
    breach of contract, promissory estoppel, and violation of Business and
    Professions Code section 17200, the unfair competition law (UCL).
    Twitter operates an Internet communications platform that allows its
    users to post short messages, called “tweets,” as well as photos and short
    videos. Hundreds of millions of active users use Twitter to communicate,
    share views, and discuss issues of public interest. Twitter users can “follow”
    other users and thereby choose whose tweets they want to see.
    Meghan Murphy is a freelance journalist and writer who writes
    primarily on feminist issues from both a socialist and feminist perspective.
    Murphy is also the founder and editor of Feminist Current, a feminist blog
    and podcast. Murphy joined Twitter in April 2011, and used it “to discuss
    newsworthy events and public issues, share articles, podcasts and videos,
    promote and support her writing, journalism and public speaking activities,
    and communicate with her followers.” At the time her account was
    permanently suspended, Murphy had approximately 25,000 followers.
    Twitter had also given Murphy a blue “verification badge,” which “ ‘lets
    people know that an account of public interest is authentic.’ ”
    According to her complaint, Murphy “writes primarily on feminist
    issues, including the Me Too movement, the sex industry, sex education,
    third-wave feminism, and gender identity politics.” In her work, Murphy
    argues “that there is a difference between acknowledging that transgender
    women see themselves as female and counting them as women in a legal or
    social sense.” She “object[s] to the notion that one’s gender is purely a matter
    of personal preference.”
    Beginning in January 2018, Murphy posted a series of tweets about
    Hailey Heartless, a prominent public figure who had been chosen to speak at
    the Vancouver Women’s March in 2018. According to the march organizers,
    Heartless “ ‘self identifies as a transsexual professional dominatrix’ ” and
    “ ‘has over ten years of activist experience in LGBTQ, feminist, sex positive,
    sex worker and labour communities.’ ” Heartless’s legal name is Lisa Kreut.
    Kreut had identified as a male until approximately three years earlier.
    At the 2016 British Columbia Federation of Labour (BCFED)
    Conference, Kreut had helped organize a successful effort to prohibit BCFED
    and its affiliated unions from funding the Vancouver Rape Relief and
    Women’s Shelter, on the ground that it limited its services to biological
    females. Murphy was “intensely critical of the effort to defund the Women’s
    Shelter.”
    On January 11, 2018, Murphy tweeted: “For the record, this
    ‘dominatrix’ was also one of those behind the push to get @bcfed to boycott
    and defund Vancouver Rape Relief, Canada’s longest standing rape crisis
    center. He is ACTIVELY working to take away women’s services and harm
    the feminist movement.”
    In May 2018, Murphy again tweeted about Kreut after Kreut and
    others signed an open letter to organizers of the Vancouver Crossroads
    conference. The letter, which was posted to a website Kreut helped create,
    urged conference organizers to remove a local poverty activist from a panel
    discussion on urban renewal because she was “ ‘a well-documented Trans
    Exclusionary Radical Feminist (TERF) and Sex Worker Exclusionary Radical
    Feminist (SWERF), and is known in the community to promote this
    ideology.’ ” Murphy alleges the letter made clear that it was also targeted at
    her, and that the letter signatories were “urging that she never again be
    allowed to speak in public either.” In response to the open letter, Murphy
    tweeted: “Lisa Kreut and another trans-identified male/misogynist created a
    website in order to libel a local woc activist, and published a letter
    demanding she be removed from a panel scheduled as part of this conference
    . . . . The organizers caved immediately.” A second tweet posted moments
    later said: “The ‘evidence’ provided to claim the activist should be removed is
    almost entirely to do with her activism against the sex trade, then literally a
    few retweets and ‘likes’ from feminists these men don’t like. Seven people
    signed the thing. It’s ridiculous.”
    After Murphy’s May tweets, Kreut contacted SheKnows Publishing
    Network, the company that arranges advertising for Murphy’s blog, Feminist
    Current, to complain about Murphy’s writing. SheKnows responded in July
    2018 by pulling all advertising from Feminist Current and terminating its
    relationship with the site.
    On August 30, Murphy wrote three more tweets about Kreut:
    “ ‘Aaaand look who publicly admitted to going after
    @feministcurrent’s ad revenue in an attempt to shut us down,
    and is now offering tips to other men in order to go after
    @MumsnetTowers’ ”
    “This is Lisa Kreut, @lispinglisa, the male BDSMer who was
    given a platform to promote prostitution at the Vancouver
    Women’s March this year, who led efforts to defund Vancouver
    Rape Relief & Women’s Shelter at BCFED 2016 . . . .”
    “So @BlogHer pulled revenue from a feminist site because a white
    man who spends his energy promoting the sex trade as
    empowering for women and targeting/trying to silence/defund
    women’s shelters, female activists, and feminist media told them
    to.”
    The same day, Twitter locked Murphy’s account for the first time.
    Twitter claimed that four of Murphy’s tweets, the tweet from January 11 and
    the three tweets from August 30, “[v]iolat[ed] our rules against hateful
    conduct.” Twitter required Murphy to delete the tweets before she could
    regain access to her account. The next day, Murphy tweeted: “Hi @Twitter,
    I’m a journalist. Am I no longer allowed to report facts on your platform?”
    Twitter required Murphy to delete that tweet as well on the ground it
    violated Twitter’s “Hateful Conduct Policy” (Hateful Conduct Policy). Twitter
    also suspended Murphy’s account for 12 hours. Murphy appealed the
    suspension, but received no response.
    On November 15, Murphy’s account was locked again. Twitter
    required her to remove a tweet from October 11 that stated, “Men aren’t
    women,” and a tweet from October 15 that asked, “How are transwomen not
    men? What is the difference between a man and a transwoman?” Twitter
    told Murphy the tweets violated its Hateful Conduct Policy.
    The same day, Murphy tweeted: “This is fucking bullshit @twitter. I’m
    not allowed to say that men aren’t women or ask questions about the notion
    of transgenderism at all anymore? That a multi billion dollar company is
    censoring BASIC FACTS and silencing people who ask questions about this
    dogma is INSANE . . . .”
    Four days later, on November 19, Twitter locked Murphy out of her
    account and required her to delete her tweet from November 15 in which she
    criticized Twitter’s actions. Twitter did not identify any rule or policy the
    November 15 tweet violated. On November 20, Murphy was once again
    locked out of her account, and required to delete her two tweets from May
    2018 about Lisa Kreut.
    On November 23, Twitter sent Murphy a private e-mail stating that
    she was being permanently suspended based on a November 8 tweet in which
    Murphy wrote, “ ‘Yeeeah it’s him’ ” over an embedded image of a Google
    review of a waxing salon posted by “Jonathan Y.” (hereafter J.Y.).1 J.Y. had
    filed 16 different human rights complaints under the alias J.Y. against
    female estheticians across Canada for refusing to perform Brazilian waxes on
    J.Y. because J.Y. has male genitalia. On November 8, Murphy posted on
    Twitter, citing J.Y.’s Twitter handle, @trustednerd: “ ‘Is it true that the man
    responsible for trying to extort money from estheticians who refuse to give
    him a brazilian bikini wax is @trustednerd? Why tf is the media/court
    protecting this guy’s identity either way? The women he targeted don’t get
    that luxury.’ ” Murphy then tweeted: “ ‘This is also, it should be pointed out,
    a key problem with allowing men to ID as female, change their names, IDs
    etc. They can leave behind these kinds of pasts (and likely continue to
    1 Murphy’s complaint refers to J.Y. as “Jonathan [Y.].” Twitter, in its
    respondent’s brief, states that J.Y. is a transgender woman, Jessica Y.
    Though Twitter does not attribute that fact to any portion of the record, we
    note that Murphy seeks judicial notice of a decision from a Canadian tribunal
    which states “Jessica [Y.] is a transgender woman,” and refers to J.Y. with
    the pronouns “her” and “she.”
    predate on women and girls, where that abuse will be reported as perpetrated
    by a “woman”).’ ” Twitter told Murphy the tweet violated its Hateful Conduct
    Policy.
    As adopted in 2015, Twitter’s Hateful Conduct Policy stated: “Hateful
    conduct: You may not promote violence against or directly attack or threaten
    other people on the basis of race, ethnicity, national origin, sexual
    orientation, gender, gender identity, religious affiliation, age, disability, or
    serious disease. We also do not allow accounts whose primary purpose is
    inciting harm towards others on the basis of these categories.” Murphy
    alleges Twitter amended the policy in December 2017, to add a preface
    stating: “Freedom of expression means little if voices are silenced because
    people are afraid to speak up. We do not tolerate behavior that harasses,
    intimidates, or uses fear to silence another person’s voice. If you see
    something on Twitter that violates these rules, please report it to us.” The
    amended policy also offered specific examples of harassing behavior Twitter
    does not tolerate and added a section on “How enforcement works,” which
    emphasized, “Context matters. [¶] . . . Some Tweets may seem to be abusive
    when viewed in isolation, but may not be when viewed in the context of a
    larger conversation. While we accept reports of violations from anyone,
    sometimes we also need to hear directly from the target to ensure that we
    have proper context.”
    Murphy alleges that in late October 2018, Twitter made “sweeping
    changes” to its Hateful Conduct Policy, “nearly tripling the policy in length,”
    and adding a provision that prohibited “targeting individuals with repeated
    slurs, tropes or other content that intends to dehumanize, degrade or
    reinforce negative or harmful stereotypes about a protected category. This
    includes targeted misgendering or deadnaming of transgender individuals.”
    Murphy alleges Twitter instituted the new policy without providing the 30-
    day notice required by its own terms of service and retroactively enforced its
    new terms against her. Murphy also contends the new Hateful Conduct
    Policy is “viewpoint discriminatory on its face” because it “forbids expression
    of the viewpoints that 1) whether an individual is a man or a woman is
    determined by their sex at birth and 2) an individual’s gender is not simply a
    matter of personal preference,” viewpoints she alleges are “widely-held” and
    “shared by a majority of the American public.” She alleges the new policy
    contradicted “repeated promises and representations” by Twitter “that it
    would not ban users based on their political philosophies or viewpoints or
    promulgate policies barring users from expressing certain philosophies, or
    viewpoints.” She further alleges enforcement of the “ ‘misgendering’ ” policy
    requires Twitter to “engage in active content monitoring and censorship,”
    which its rules previously said it would not do.
    Twitter also amended its terms of service several times between 2012
    and 2017, with respect to its right to suspend or terminate accounts and
    remove or refuse to distribute content provided by users. On May 18, 2015, it
    amended the terms of service to state, among other things: “We may suspend
    or terminate your accounts or cease providing you with all or part of the
    Services at any time for any or no reason, including, but not limited to, if we
    reasonably believe: (i) you have violated these Terms or the Twitter Rules
    . . . .” That provision was still operative in 2018, when Murphy filed her
    lawsuit. On October 2, 2017, Twitter further amended the terms of service to
    state: “We may also remove or refuse to distribute any Content on the
    Services, suspend or terminate users, and reclaim usernames without
    liability to you.” Twitter’s terms of service also state users will be notified
    “30 days in advance of making effective changes to [the terms of service] that
    impact the rights or obligations of any party to these Terms,” and promises
    users that changes “will not be retroactive.”
    Murphy’s first cause of action for breach of contract alleges Twitter
    breached the express contractual terms of its user agreement by failing to
    provide notice of the changes to the Hateful Conduct Policy, including the
    “new ‘misgendering’ provision,” before enacting it, and by enforcing the
    changes against Murphy retroactively. Murphy also contends Twitter
    violated its user agreement and “the duty of good faith and fair dealing
    implicit within it” because it “targeted her for permanent suspension despite
    the fact that she never violated any of the Terms of Service, Rules, or
    incorporated policies.” She further asserts the portions of the terms of service
    purporting to give Twitter the right to suspend an account “at any time for
    any or no reason” and “without liability to you” are procedurally and
    substantively unconscionable.
    Her second cause of action for promissory estoppel alleges Twitter
    violated several clear and unambiguous promises in the user agreement, on
    its website, and in public statements, including the promise to not monitor or
    censor content, the promise to notify users of changes 30 days before they are
    made, the promise to not apply changes retroactively, a promise to reserve
    “account-level” actions, such as permanent suspensions, for repeated or
    egregious violations, and promises to treat everyone the same and not
    consider “ ‘political viewpoints, perspectives, or party affiliation in any of
    [Twitter’s] policies or enforcement decisions, period.’ ” Murphy alleges she
    and other users foreseeably and reasonably relied on the promises to their
    detriment, and that they have lost “valuable economic interests in access to
    their Twitter account[s] and their followers forever.”
    Murphy’s third cause of action for violation of the UCL alleges Twitter
    committed an unfair business practice by inserting the alleged
    unconscionable provisions allowing it to suspend or ban accounts “at any time
    for any or no reason” and “without liability to you” into its terms of service,
    and that its practices are “fraudulent” within the meaning of the UCL
    because Twitter falsely “held itself out to be a free speech platform” and
    promised not to actively monitor or censor user content.
    In her complaint, Murphy seeks injunctive relief on behalf of herself
    and similarly situated individuals, requiring, among other things, that
    Twitter (1) cease and desist enforcing its “unannounced and viewpoint
    discriminatory ‘misgendering’ rule”; (2) restore accounts it had suspended or
    banned pursuant to the misgendering policy; (3) remove “unconscionable
    provisions in its terms of service purporting to give Twitter the right to
    suspend or ban an account ‘at any time for any or no reason’ and ‘without
    liability to you’ ”; and (4) issue a “full and frank correction of its false and
    misleading advertising and representations to the general public that it does
    not censor user content except in narrowly-defined, viewpoint-neutral
    circumstances . . . .” Murphy also requests a declaratory judgment that
    Twitter has breached its contractual agreements with Murphy and similarly
    situated users by taking the actions alleged in the complaint.
    Twitter filed a demurrer and special motion to strike under the anti-
    SLAPP law (Code Civ. Proc., § 425.16).2 Twitter argued that Murphy’s claims
    were barred by the immunity provided in section 230(c)(1) and the First
    Amendment to the United States Constitution. Twitter also asserted
    2 “ ‘SLAPP’ is an acronym for ‘strategic lawsuit against public
    participation.’ ” (Baral v. Schnitt (2016) 1 Cal.5th 376, 381, fn. 1.)
    Murphy had failed to state a claim as to each of her causes of action under
    California law. Murphy filed an opposition and Twitter filed a reply.3
    After hearing oral argument, the trial court denied Twitter’s anti-
    SLAPP motion based on the “public interest” exception of Code of Civil
    Procedure section 425.17, subdivision (b). The court sustained Twitter’s
    demurrer without leave to amend based on section 230(c)(1) and thereafter
    entered a judgment of dismissal.
    II. DISCUSSION
    A. Communications Decency Act
    Section 230 is part of the Communications Decency Act of 1996. As our
    Supreme Court has explained, “Congress enacted section 230 ‘for two basic
    policy reasons: to promote the free exchange of information and ideas over
    the Internet and to encourage voluntary monitoring for offensive or obscene
    material.’ ” (Hassell v. Bird (2018) 5 Cal.5th 522, 534 (Hassell).) The statute
    contains express findings and policy declarations recognizing the rapid
    growth of the Internet, the beneficial effect of minimal government regulation
    on its expansion, and the twin policy goals of “promot[ing] the continued
    development of the Internet and other interactive computer services” and
    “preserv[ing] the vibrant and competitive free market that presently exists
    for the Internet and other interactive computer services, unfettered by
    Federal or State regulation.” (§ 230(a), (b).) The law was enacted in part in
    response to an unpublished New York trial court decision, Stratton Oakmont,
    Inc. v. Prodigy Services Co. (N.Y.Sup.Ct. 1995) 23 Media L.Rep. 1794
    [1995 WL 323710] (Stratton Oakmont), which held that because an operator of
    3 Twitter apparently filed a reply brief in support of its demurrer, but it
    was not included in the record on appeal. The only reply brief in the record
    on appeal was Twitter’s reply in support of its special motion to strike under
    Code of Civil Procedure section 425.16.
    Internet bulletin boards had taken an active role in monitoring and editing
    the content posted by third parties to the bulletin boards, it could be regarded
    as the “publisher” of material posted on them and held liable for defamation.
    (Hassell, at p. 534; Zeran v. America Online, Inc. (4th Cir. 1997) 129 F.3d
    327, 331 (Zeran).)
    Section 230(c)(1), which is captioned “Treatment of publisher or
    speaker,” states: “No provider or user of an interactive computer service
    shall be treated as the publisher or speaker of any information provided by
    another information content provider.” As relevant here, the statute also
    expressly preempts any state law claims inconsistent with that provision:
    “No cause of action may be brought and no liability may be imposed under
    any State or local law that is inconsistent with this section.” (§ 230(e)(3).)
    Read together these two provisions “protect from liability (1) a provider or
    user of an interactive computer service (2) whom a plaintiff seeks to treat,
    under a state law cause of action, as a publisher or speaker (3) of information
    provided by another information content provider.” (Barnes v. Yahoo!, Inc.
    (9th Cir. 2009) 570 F.3d 1096, 1100–1101, fn. omitted (Barnes); Delfino v.
    Agilent Technologies, Inc. (2006) 145 Cal.App.4th 790, 804–805 (Delfino).)
    Two California Supreme Court cases, Hassell, supra, 5 Cal.5th 522 and
    Barrett v. Rosenthal (2006) 40 Cal.4th 33 (Barrett), have addressed immunity
    under section 230, discussing at length statutory interpretation and judicial
    construction of the statute.4 In both cases, our high court concluded
    4  Although Hassell was a plurality opinion, Justice Kruger, concurring
    in the judgment, agreed with the plurality’s analysis of section 230. (Hassell,
    supra, 5 Cal.5th at pp. 548, 558 (conc. opn. of Kruger, J.) [“section 230
    immunity applies to an effort to bring a cause of action or impose civil
    liability on a computer service provider that derives from its status as a
    publisher or speaker of third party content”].)
    section 230 is to be construed broadly in favor of immunity. (Hassell, at
    p. 544 [“broad scope of section 230 immunity” is underscored by “inclusive
    language” of § 230(e)(3), which, “read in connection with section 230(c)(1) and
    the rest of section 230, conveys an intent to shield Internet intermediaries
    from the burdens associated with defending against state law claims that
    treat them as the publisher or speaker of third party content, and from
    compelled compliance with demands for relief that, when viewed in the
    context of a plaintiff’s allegations, similarly assign them the legal role and
    responsibilities of a publisher qua publisher”]; Barrett, at p. 39 [immunity
    provisions within § 230 “have been widely and consistently interpreted to
    confer broad immunity”].) California’s appellate courts and federal courts
    have also generally interpreted section 230 to confer broad immunity on
    interactive computer services. (See Doe II v. MySpace Inc. (2009)
    175 Cal.App.4th 561, 572 [concluding a “general consensus to interpret
    section 230 immunity broadly” could be derived from California and federal
    court cases]; Delfino, supra, 145 Cal.App.4th at p. 804; accord, Doe v.
    MySpace Inc. (5th Cir. 2008) 528 F.3d 413, 418; Carafano v. Metrosplash.com,
    Inc. (9th Cir. 2003) 339 F.3d 1119, 1123 [“reviewing courts have treated
    § 230(c) immunity as quite robust”]; but see Barnes, supra, 570 F.3d at
    p. 1100 [text of § 230(c) “appears clear that neither this subsection nor any
    other declares a general immunity from liability deriving from third-party
    content”].)
    Murphy’s claims against Twitter satisfy all three conditions for
    immunity under section 230(c)(1). First, an “ ‘interactive computer service’ ”
    is “any information service, system, or access software provider that provides
    or enables computer access by multiple users to a computer server.”
    (§ 230(f)(2).) Twitter meets that description, as Murphy concedes. (See
    American Freedom Defense Initiative v. Lynch (D.D.C. 2016) 217 F.Supp.3d
    100, 104 [Twitter is an interactive computer service under the CDA]; Fields v.
    Twitter, Inc. (N.D.Cal. 2016) 217 F.Supp.3d 1116, 1121, affd. 881 F.3d 739
    (9th Cir. 2018); Brittain v. Twitter, Inc. (N.D.Cal. Jun. 10, 2019, No. 19-cv-
    00114-YGR) 2019 WL 2423375 at p. *2.)
    Second, Murphy’s claims all seek to hold Twitter liable for requiring
    her to remove tweets and suspending her Twitter account and those of other
    users. Twitter’s refusal to allow certain content on its platform, however, is
    typical publisher conduct protected by section 230. “ ‘[Section] 230 precludes
    courts from entertaining claims that would place a computer service provider
    in a publisher’s role. Thus, lawsuits seeking to hold a service provider liable
    for its exercise of a publisher’s traditional editorial functions—such as
    deciding whether to publish, withdraw, postpone or alter content—are
    barred.’ ” (Barrett, supra, 40 Cal.4th at p. 43, quoting Zeran, supra, 129 F.3d
    at p. 330; Fair Housing Council of San Fernando Valley v. Roommates.com,
    LLC (9th Cir. 2008) 521 F.3d 1157, 1170–1171 [“any activity that can be
    boiled down to deciding whether to exclude material that third parties seek to
    post online is perforce immune under section 230”].)
    The third prong is also satisfied here, because Murphy’s claims all
    concern Twitter’s removal of or refusal to publish “information provided by
    another information content provider.” An “ ‘information content provider’ ”
    is defined as “any person or entity that is responsible, in whole or in part, for
    the creation or development of information provided through the Internet or
    any other interactive computer service.” (§ 230(f)(3).) All of the content that
    Murphy claims Twitter required her or others to remove and is wrongfully
    censoring was created and posted by Murphy and others, not Twitter. (See,
    e.g., Cross v. Facebook, Inc. (2017) 14 Cal.App.5th 190, 207 (Cross); Federal
    Agency of News LLC v. Facebook, Inc. (N.D.Cal. 2020) 432 F.Supp.3d 1107,
    1118 (Federal Agency of News) [complaint admitted Federal Agency of News
    LLC was the source of content Facebook, Inc. (Facebook) removed].)
    Murphy takes issue with both the second and third prongs of the
    section 230 test as they relate to her claims. She contends section 230(c)(1)
    cannot apply in this case because the “only information at issue is Twitter’s
    own promises,” not “ ‘information provided by another content provider,’ ” and
    because she seeks to treat Twitter not as a publisher of information provided
    by others, but as a promisor or party to a contract. Murphy relies on Barnes,
    supra, 570 F.3d at page 1107 and Demetriades v. Yelp, Inc. (2014)
    228 Cal.App.4th 294, 313 (Demetriades) to argue that “[c]ourts routinely have
    held that Section 230 permits contract, promissory estoppel and consumer
    fraud claims” such as hers.
    In assessing whether a claim treats a provider as a publisher or
    speaker of user-generated content, however, courts focus not on the name of
    the cause of action, but whether the plaintiff’s claim requires the court to
    treat the defendant as the publisher or speaker of information created by
    another. (Barnes, supra, 570 F.3d at pp. 1101–1102; Cross, supra,
    14 Cal.App.5th at p. 207.) This test prevents plaintiffs from avoiding the
    broad immunity of section 230 through the “ ‘ “creative” pleading’ of barred
    claims” or using “litigation strategy . . . to accomplish indirectly what
    Congress has clearly forbidden them to achieve directly.” (Hassell, supra,
    5 Cal.5th at pp. 542, 541.)
    Courts have routinely rejected a wide variety of civil claims like
    Murphy’s that seek to hold interactive computer services liable for removing
    or blocking content or suspending or deleting accounts (or failing to do so) on
    the grounds they are barred by the CDA.5 (See, e.g., Doe II v. MySpace Inc.,
    supra, 175 Cal.App.4th at p. 573 [§ 230 immunity barred tort claims based on
    social networking website’s decisions whether “to restrict or make available”
    minors’ profiles]; Gentry v. eBay, Inc. (2002) 99 Cal.App.4th 816, 834–836
    [rejecting negligence and UCL claims under § 230 and California law based
    on auction website’s compilation of and failure to withdraw allegedly false
    and misleading information provided by individual sellers on its website];
    Wilson v. Twitter (S.D.W.Va. May 1, 2020, No. 3:20-cv-00054) 2020 WL
    3410349 at pp. *1, *12 [plaintiff’s claims seeking to hold Twitter liable for
    deleting posts and suspending account based on hateful conduct policy barred
    by § 230(c)(1)], report and recommendation adopted, Wilson v. Twitter
    (S.D.W.Va. June 16, 2020, No. 3:20-cv-00054) 2020 WL 3256820; Domen v.
    Vimeo, Inc. (S.D.N.Y. 2020) 433 F.Supp.3d 592, 602–603 [defendant entitled
    to immunity under § 230(c)(1) because plaintiffs sought to treat defendant as
    a “ ‘publisher’ ” for deleting plaintiffs’ content on its website]; Ebeid v.
    Facebook, Inc. (N.D.Cal. May 9, 2019, No. 18-cv-07030-PJH) 2019 WL
    2059662 at pp. *3–*5 [removal of plaintiff’s Facebook posts and restrictions
    on his use of his account constituted “publisher activity” protected by § 230];
    5 Courts have found immunity for interactive computer services under
    section 230 regardless of whether the provider is alleged to have improperly
    removed objectionable content or failed to remove such content. “[N]o logical
    distinction can be drawn between a defendant who actively selects
    information for publication and one who screens submitted material,
    removing offensive content. ‘. . . the difference is one of method or degree, not
    substance.’ ” (Barrett, supra, 40 Cal.4th at p. 62; see Barnes, supra, 570 F.3d
    at p. 1102, fn. 8 [it is immaterial whether service provider’s exercise of
    publisher’s traditional editorial functions “comes in the form of deciding what
    to publish in the first place or what to remove among the published
    material”]; Fyk v. Facebook, Inc. (9th Cir. 2020) 808 Fed.Appx. 597, 597,
    fn. 2.)
    Mezey v. Twitter, Inc. (S.D.Fla. Jul. 19, 2018, No. 1:18-cv-21069-KMM) 2018
    WL 5306769, at pp. *1–*2 [lawsuit alleging Twitter “unlawfully suspended
    [plaintiff’s] Twitter account” dismissed on grounds of § 230(c)(1) immunity];
    Fields v. Twitter, Inc. (N.D.Cal. 2016) 217 F.Supp.3d 1116, 1123–1125, affd.
    881 F.3d 739 (9th Cir. 2018) [holding provision of Twitter accounts to alleged
    terrorists was publishing activity immunized by § 230]; Sikhs for Justice
    “SFJ”, Inc. v. Facebook, Inc. (N.D.Cal. 2015) 144 F.Supp.3d 1088, 1092–1093
    [rejecting claim by human rights organization against Facebook, Inc. for
    blocking access to its website for discriminatory reasons at the request of
    government of India]; Riggs v. MySpace, Inc. (9th Cir. 2011) 444 Fed.Appx.
    986, 987 [district court properly dismissed claims arising from service
    provider’s decisions to delete plaintiff’s user profiles].)
    While Murphy is correct that some courts have rejected the application
    of section 230 immunity to certain breach of contract and promissory estoppel
    claims, many others have concluded such claims were barred because the
    plaintiff’s cause of action sought to treat the defendant as a publisher or
    speaker of user generated content. (See, e.g., Cross, supra, 14 Cal.App.5th at
    pp. 206–207 [plaintiffs’ breach of contract claim barred by CDA]; Federal
    Agency of News, supra, 432 F.Supp.3d at pp. 1119–1120 [plaintiffs’ causes of
    action for breach of contract and breach of the implied covenant of good faith
    and fair dealing prohibited by § 230]; Brittain v. Twitter, Inc., supra, 2019
    WL 2423375 at pp. *3–*4 [dismissing, among others, causes of action for
    breach of contract and promissory estoppel]; King v. Facebook, Inc. (N.D.Cal.
    Sept. 5, 2019, No. 19-cv-01987-WHO) 2019 WL 4221768 at pp. *1, fn. 1, *3–*5
    [dismissing breach of contract, promissory estoppel, and UCL claims under
    § 230(c)(1)]; Caraccioli v. Facebook, Inc. (N.D.Cal. 2016) 167 F.Supp.3d 1056,
    1061, 1064–1066 [dismissing breach of contract and UCL claims under
    § 230]; Goddard v. Google, Inc. (N.D.Cal. 2009) 640 F.Supp.2d 1193, 1199,
    1201 (Goddard) [dismissing breach of contract claim based on search engine’s
    failure to abide by terms of its content policy].)
    Murphy relies heavily on Barnes, supra, 570 F.3d 1096 to argue the
    viability of her claims, but that case is distinguishable. In Barnes, the Ninth
    Circuit considered whether an Internet service provider could be held liable
    after promising to remove material harmful to the plaintiff but failing to do so.
    Plaintiff Barnes had broken off a long relationship with her boyfriend, who
    then posted unauthorized false profiles of her to Yahoo, Inc.’s (Yahoo)
    website, soliciting sex. She repeatedly demanded that Yahoo remove the
    false profiles, but the company did not respond. The day before a local news
    story about the incident was to be broadcast, Yahoo’s director of
    communications called Barnes and told her she would “ ‘personally walk the
    statements over to the division responsible for stopping unauthorized profiles
    and they would take care of it.’ ” (Id. at pp. 1098–1099.) Barnes relied on
    Yahoo’s promise to remove the content, but Yahoo still did not take down the
    profiles until after Barnes filed her lawsuit. (Id. at p. 1099.)
    Barnes filed a complaint against Yahoo for negligent undertaking and
    promissory estoppel. (Barnes, supra, 570 F.3d at p. 1099.) The court
    concluded Barnes’s negligent undertaking claim was barred by
    section 230(c)(1) because “removing content is something publishers do,” and
    the “duty that Barnes claims Yahoo violated derives from Yahoo’s conduct as
    a publisher—the steps it allegedly took, but later supposedly abandoned, to
    de-publish the offensive profiles.” (Barnes, at p. 1103.) By contrast, the court
    held, her promissory estoppel claim was not precluded because “the duty the
    defendant allegedly violated springs from a contract—an enforceable
    promise—not from any non-contractual conduct or capacity of the defendant.
    [Citation.] Barnes does not seek to hold Yahoo liable as a publisher or
    speaker of third-party content, but rather as the counter-party to a contract,
    as a promisor who has breached.” (Id. at p. 1107.)
    Barnes never suggested, however, that all contract or promissory
    estoppel claims survive CDA immunity. (See, e.g., Cross, supra,
    14 Cal.App.5th at p. 207 [noting Barnes directs us to look beyond the name of
    the cause of action and examine whether the duty plaintiff alleges defendant
    violated derives from defendant’s status as a publisher to determine whether
    § 230(c)(1) precludes liability]; Goddard, supra, 640 F.Supp.2d at p. 1200
    [“Read as broadly as possible, Barnes stands for the proposition that when a
    party engages in conduct giving rise to an independent and enforceable
    contractual obligation, that party may be ‘h[eld] . . . liable . . . as a counter-
    party to a contract, as a promisor who has breached.’ ”].) Unlike in Barnes,
    where the plaintiff sought damages for breach of a specific personal promise
    made by an employee to ensure specific content was removed from Yahoo’s
    website, the substance of Murphy’s complaint accuses Twitter of unfairly
    applying its general rules regarding what content it will publish and seeks
    injunctive relief to demand that Twitter restore her account and refrain from
    enforcing its Hateful Conduct Policy. Murphy does not allege someone at
    Twitter specifically promised her they would not remove her tweets or would
    not suspend her account. Rather, Twitter’s alleged actions in refusing to
    publish and banning Murphy’s tweets, as the trial court in this case observed,
    “reflect paradigmatic editorial decisions not to publish particular content”
    that are protected by section 230.
    Indeed, the Barnes court itself recognized a difference between the type
    of allegations Murphy makes here and the specific promise alleged in that
    case. Noting that “as a matter of contract law, the promise must ‘be as clear
    and well defined as a promise that could serve as an offer, or that otherwise
    might be sufficient to give rise to a traditional contract supported by
    consideration,’ ” the Barnes court explained that “a general monitoring policy
    . . . does not suffice for contract liability.” (Barnes, supra, 570 F.3d at
    p. 1108.) Here, Murphy’s allegations that Twitter “enforced its Hateful
    Conduct Policy in a discriminatory and targeted manner” against Murphy
    and others by removing her tweets and suspending her account amount to
    attacks on Twitter’s interpretation and enforcement of its own general
    policies rather than breach of a specific promise.6
    We likewise are not persuaded by Murphy’s reliance on Demetriades,
    supra, 228 Cal.App.4th 294. There, a restaurant operator sought an
    injunction under the UCL to prevent Yelp, Inc. (Yelp) from making claims
    6  Although Murphy also points to the allegations that Twitter failed to
    give her 30 days’ notice of the changes to the Hateful Conduct Policy and that
    Twitter applied its new policy retroactively as breaches of clear and well-
    defined promises, the gravamen of each of her causes of action concerns
    Twitter’s editorial decisions not to publish content—as reflected by the fact
    that she alleges no specific injury from the alleged notice and retroactivity
    violations but complains instead of the harm caused by Twitter’s ban on her
    and others’ free speech rights. (See, e.g., Noah v. AOL Time Warner, Inc.
    (E.D.Va. 2004) 261 F.Supp.2d 532, 538, affd. 2004 WL 602711 (4th Cir. 2004)
    [CDA barred discrimination claim because examination of injury plaintiff
    claimed and remedy he sought clearly indicated his claim sought to place
    defendant in publisher’s role].) Moreover, Twitter’s Hateful Conduct Policy
    as adopted in December 2017, before Murphy posted any of her allegedly
    offending tweets, broadly disallowed “behavior that harasses individuals,”
    and expressly proscribed “directly attack[ing] . . . other people on the basis of
    . . . gender identity” and “repeated and/or non-consensual slurs, epithets,
    racist and sexist tropes, or other content that degrades someone.” We fail to
    see how Twitter’s amendment of its policy in October 2018 to include
    “targeted misgendering or deadnaming” as illustrative examples of such
    behavior constitute a change altering any party’s “rights or obligations”
    sufficient to give rise to a breach of contract claim based on Twitter’s user
    agreement.
    about the accuracy of its program that filters restaurant reviews.
    (Demetriades, at pp. 299–300.) Yelp filed a motion to strike the complaint
    under Code of Civil Procedure section 425.16 (anti-SLAPP motion) and the
    court concluded the commercial speech exception under Code of Civil
    Procedure section 425.17 applied to Yelp’s statements about the accuracy and
    efficacy of its filter. (Demetriades, at pp. 305, 310.) The opinion contains very
    little discussion of the CDA, but the court determined Yelp was not immune
    under section 230 because the plaintiff sought to hold Yelp liable for its own
    “specific and detailed” factual statements about the accuracy of its filter.
    (Demetriades, at pp. 311, 313.) Here, the gravamen of Murphy’s complaint
    seeks to hold Twitter liable, not for specific factual representations it made,
    but for enforcing its Hateful Conduct Policy against her and exercising its
    editorial discretion to remove content she had posted on its platform.7
    Nor are we persuaded by Murphy’s argument that section 230
    immunity does not apply here because the content at issue is Twitter’s own
    promises rather than content generated by Murphy and other users. Courts
    have repeatedly determined that when plaintiffs allege a platform has
    wrongfully ceased publishing their posts or blocked content, that content
    constitutes “information provided by another information content provider”
    7 Murphy also cites Joude v. WordPress Foundation (N.D.Cal. Jul. 3,
    2014, No. C14-01656 LB) 2014 WL 3107441, to support her contract claim,
    but that case is particularly unpersuasive. Murphy quotes the court’s
    statement that “claim two is a contract claim that—if viable—would be a
    state law claim that would not be barred by section 230,” but the Joude court
    had analyzed in great detail how the plaintiffs failed to allege a cognizable
    cause of action for breach of contract based on the provider’s terms of service,
    noted it could not “conceive of an amendment that would cure the
    shortcomings,” and expressly stated the “claim looks to be a claim that ought
    to be made against the blogger, however, not [the interactive service
    provider].” (Joude, at pp. *8, *6.)
    within the meaning of section 230(c)(1). (See Sikhs for Justice “SFJ”, Inc. v.
    Facebook, Inc., supra, 144 F.Supp.3d at pp. 1090, 1093–1094 [explaining that
    § 230(c)(1)’s reference to “another information content provider” refers to an
    interactive computer service provider that passively displays third party
    content as opposed to one that participates in creating or developing it];
    Federal Agency of News, supra, 432 F.Supp.3d at pp. 1117–1118 [defendant
    satisfied “another information content provider” prong of § 230 immunity test
    where plaintiffs sought to hold defendant liable for removing plaintiffs’
    account, posts, and content].)
    Division Two of this court rejected a claim similar to Murphy’s in Cross,
    supra, 14 Cal.App.5th at pages 206–207. There, the plaintiff argued that
    portions of statements made in “Facebook’s terms and community standards”
    were “ ‘representations of fact’ made by Facebook” and that section 230 did
    not apply to the breach of contract and negligence claims because the
    complaint “ ‘specifically allege[d] that Facebook is liable because of its own
    promises and representations to [plaintiff], not because of anyone else’s
    statements.’ ” (Id. at pp. 203, 206.)
    The Cross court rejected the plaintiff’s argument that Facebook was
    liable for failing “ ‘to adhere to its own legally enforceable promise.’ ” (Cross,
    supra, 14 Cal.App.5th at p. 207.) Citing Barnes, it explained that “[i]n
    evaluating whether a claim treats a provider as a publisher or speaker of
    user-generated content, ‘what matters is not the name of the cause of
    action,’ ” but “ ‘whether the cause of action inherently requires the court to
    treat the defendant as the “publisher or speaker” of content provided by
    another.’ ” (Cross, at p. 207.) The court then cited numerous state and
    federal court decisions that have found claims based on a failure to remove
    content posted by others were barred by the CDA. (Cross, at p. 207.)
    We agree with the Cross court’s analysis, and find it equally applicable
    in this case. Murphy argues Cross is distinguishable because it involved
    liability for a service provider’s failure to remove third party content, while
    she seeks to hold Twitter liable for its own promises and representations.
    But, as we have discussed, that is the very same argument the Cross court
    considered and rejected, i.e., that Twitter can be held liable based on
    promises and representations in its general terms of service. (Cross, supra,
    14 Cal.App.5th at p. 207.) Murphy also notes the plaintiff in Cross failed to
    “ ‘identif[y] any “representation of fact” that Facebook would remove any
    objectionable content,’ ” whereas she identified several terms in the user
    agreement. But Murphy does not identify any specific representation of fact
    or promise by Twitter to Murphy that it would not remove her tweets or
    suspend her account beyond general statements in its monitoring policy, the
    type of allegation the Barnes court noted would be insufficient to state a
    claim. (Barnes, supra, 570 F.3d at p. 1108.)
    Murphy also urges us to conclude that the trial court erred because
    section 230(c)(2), not section 230(c)(1), governs Twitter’s actions suspending
    her account and removing her tweets from its platform.8 Section 230(c)(2)
    provides, in relevant part: “No provider or user of an interactive computer
    service shall be held liable on account of— [¶] (A) any action voluntarily
    taken in good faith to restrict access to or availability of material that the
    provider or user considers to be obscene, lewd, lascivious, filthy, excessively
    violent, harassing, or otherwise objectionable, whether or not such material is
    constitutionally protected.” Murphy contends the clear text of the statute,
    8 Twitter expressly confirmed at the trial court hearing that it was not
    relying on section 230(c)(2) for purposes of the demurrer to Murphy’s
    complaint.
    the canon of statutory interpretation against surplusage, and our Supreme
    Court’s decision in Barrett, supra, 40 Cal.4th 33, all preclude Twitter from
    relying on section 230(c)(1), as opposed to section 230(c)(2), to defend its
    removal of allegedly harassing content.
    We are not convinced. As the Ninth Circuit explained in Barnes,
    section 230(c)(1) “shields from liability all publication decisions, whether to
    edit, to remove, or to post, with respect to content generated entirely by third
    parties.” (Barnes, supra, 570 F.3d at p. 1105, italics added.) Section
    230(c)(2), on the other hand, applies “not merely to those whom subsection
    (c)(1) already protects, but any provider of an interactive computer service”
    regardless of whether the content at issue was created or developed by third
    parties. (Barnes, at p. 1105.) Thus, section 230(c)(2) “provides an additional
    shield from liability,” encompassing, for example, those interactive computer
    service providers “who cannot take advantage of subsection (c)(1) . . . .
    because they developed, even in part, the content at issue.” (Barnes, at
    p. 1105, italics added.)
    The Ninth Circuit recently reiterated that analysis in Fyk v. Facebook,
    supra, 808 Fed.Appx. 597 (Fyk), holding that section 230(c)(1) immunized the
    interactive computer service based on its alleged “de-publishing” and “re-
    publishing” of user content. (Fyk, at pp. 597–598.) The Ninth Circuit
    explicitly rejected the argument that applying section 230(c)(1) immunity to
    such decisions renders section 230(c)(2) “mere surplusage.” (Fyk, at p. 598.)
    Other federal courts are in agreement.9 (See Domen v. Vimeo, Inc., supra,
    9 Murphy cites Zango, Inc. v. Kaspersky Lab, Inc. (9th Cir. 2009)
    568 F.3d 1169, 1175 and e-ventures Worldwide, LLC v. Google, Inc. (M.D.Fla.
    Feb. 8, 2017, No. 2:14-cv-646-FtM-PAM-CM) 2017 WL 2210029 in support of
    her argument that section 230(c)(1) and (2) “apply to different concerns.”
    Zango, however, is inapposite because it did not address whether
    433 F.Supp.3d at p. 603 [applying § 230(c)(1) to a platform’s decision to refuse
    to publish content does not render § 230(c)(2)’s good faith requirement
    surplusage because “[s]ection 230(c)(2)’s grant of immunity, while
    ‘overlapping’ with that of [s]ection 230(c)(1), [citation], also applies to
    situations not covered by [s]ection 230(c)(1)”]; Force v. Facebook, Inc. (2d Cir.
    2019) 934 F.3d 53, 79–80 (conc. opn. of Katzmann, C. J.) [noting § 230(c)(2)
    provides for much narrower civil liability than broad liability under
    § 230(c)(1), which covers publishers’ decisions regarding content removal].)
    Contrary to Murphy’s argument, this interpretation of the statutory
    scheme is not inconsistent with our Supreme Court’s analysis in Barrett.
    Barrett did not involve content restrictions by an interactive computer service
    provider, and thus the Supreme Court had no occasion to hold that only
    section 230(c)(2) applies to such claims. Rather, the court considered whether
    section 230 confers immunity not only on “ ‘publishers,’ ” but also common
    law “ ‘distributors’ ” of allegedly defamatory material. (Barrett, supra,
    40 Cal.4th at p. 39.) In its discussion of section 230(c)(1) and (2), the
    section 230(c)(1) immunity applies to decisions to remove or block content,
    but whether a company that provided malware-blocking software to users
    was protected from liability under section 230(c)(2)(B) for blocking access to
    the plaintiff’s programs. (Zango, at pp. 1174–1175 [“this case presents a
    different problem, and a statutory provision with a different aim, from ones
    we have encountered before”].) While e-Ventures supports Murphy’s
    argument here, we find it unpersuasive because the court did not consider
    the distinction recognized by courts in the Ninth Circuit between a publisher
    that is involved in the creation of content and one who only makes
    publication decisions regarding content created by others. We agree with the
    Ninth Circuit’s analysis in Barnes and Fyk that reading section 230(c)(1) to
    cover all publication decisions with respect to content generated entirely by
    third parties does not render section 230(c)(2) mere surplusage, because that
    section provides an additional shield from liability when interactive
    computer services play some role in the creation or development of content.
    (Barnes, supra, 570 F.3d at p. 1105; Fyk, supra, 808 Fed.Appx. at p. 598.)
    Supreme Court disagreed with the Court of Appeal’s reasoning that
    section 230(c)(2) would be superfluous if “ ‘publishers’ ” and “ ‘distributors’ ”
    of third party content both had broad immunity under section 230(c)(1).
    (Barrett, at pp. 48–49.) Noting that the “terms of section 230(c)(1) are broad
    and direct,” the court rejected an interpretation of the CDA that assumed
    Congress intended to immunize “ ‘publishers’ ” but leave “ ‘distributors’ ”
    vulnerable to liability.10 (Barrett, at pp. 47–48.)
    Importantly, Barrett also expressly agreed with the Zeran court’s
    analysis of section 230 immunity and its construction of the term “publisher.”
    (Barrett, supra, 40 Cal.4th at pp. 41–42, 48–49.) As discussed above,
    Murphy’s claims here fall squarely within Zeran’s holding that “lawsuits
    seeking to hold a service provider liable for its exercise of a publisher’s
    traditional editorial functions—such as deciding whether to publish,
    withdraw, postpone or alter content—are barred.” (Zeran, supra, 129 F.3d at
    p. 330.)
    Moreover, the Barrett court acknowledged in its discussion of the
    legislative history of section 230 that the statute was “enacted to remove the
    disincentives to self-regulation created by [Stratton Oakmont, supra,
    10 We acknowledge some ambiguity in our high court’s statements that
    section 230(c)(1) and (2) “address different concerns” and that
    section 230(c)(1) “is concerned with liability arising from information
    provided online,” while section 230(c)(2) is “directed at actions taken by
    Internet service providers or users to restrict access to online information.”
    (Barrett, supra, 40 Cal.4th at p. 49.) Thus, “[s]ection 230(c)(1) provides
    immunity from claims by those offended by an online publication, while
    section 230(c)(2) protects against claims by those who might object to the
    restriction of access to an online publication.” (Ibid.) We disagree with
    Murphy, however, that the statements suggest only section 230(c)(2) applies
    to her claims. Rather, we agree with Twitter and amicus curiae that the
    passage is dicta, and read in context of our high court’s broader analysis and
    specific holdings, conclude it cannot save Murphy’s claims here.
    23 Media L.Rep. 1794 [1995 WL 323710] ], in which a service provider was
    held liable as a primary publisher because it actively screened and edited
    messages posted on its bulletin boards.” (Barrett, supra, 40 Cal.4th at p. 51.)
    The court recognized that comments made by Representative Cox, one of the
    two sponsors of section 230, meant that distributors would be “protected from
    rather than threatened with liability, to encourage responsible screening of
    the content provided on their services.” (Barrett, at p. 53.) It further
    explained that both the “terms of section 230(c)(1) and the comments of
    Representative Cox reflect the intent to promote active screening by service
    providers of online content provided by others.” (Barrett, at p. 53, italics
    added.) “Congress contemplated self-regulation, rather than regulation
    compelled at the sword point of tort liability.” (Ibid.)
    Finally, we reject Murphy’s arguments that the superior court’s ruling
    frustrates section 230’s stated policy purposes. As our Supreme Court
    explained in Barrett, one of the key purposes of section 230 “was ‘to
    encourage service providers to self-regulate the dissemination of offensive
    material over their services.’ [Citation.] . . . ‘Fearing that the specter of
    liability would . . . deter service providers from blocking and screening
    offensive material, Congress enacted § 230’s broad immunity,’ which ‘forbids
    the imposition of publisher liability on a service provider for the exercise of
    its editorial and self-regulatory functions.’ ” (Barrett, supra, 40 Cal.4th at
    p. 44, fn. omitted.) Based on the clear statutory objectives as defined by
    Congress in the statute itself and our own Supreme Court’s guidance, we
    conclude the superior court’s ruling does not interfere with section 230’s twin
    policy goals of promoting free and open online discourse while encouraging
    Internet computer services to engage in active monitoring of offensive
    content. (Hassell, supra, 5 Cal.5th at p. 534.)
    In sum, the trial court properly sustained Twitter’s demurrer without
    leave to amend because each of Murphy’s claims is barred under
    section 230(c)(1).
    B. Failure to State a Claim
    Even assuming, however, that section 230(c)(1) immunity does not
    apply, we would affirm the trial court’s judgment because Murphy failed to
    state a cognizable cause of action.
    1. Breach of Contract
    Murphy’s cause of action for breach of contract is based on Twitter’s
    failure to provide notice of changes to its Hateful Conduct Policy, its
    retroactive enforcement of the policy, and its breach of the covenant of good
    faith and fair dealing because it “targeted her for permanent suspension
    despite the fact that she never violated any of the Terms of Service, Rules or
    incorporated policies.”
    Her claim necessarily fails, however, because Twitter’s terms of service
    expressly state that they reserve the right to “suspend or terminate [users’]
accounts . . . for any or no reason” without liability. (See Cox v. Twitter (D.S.C. 2019) 2019 WL 2513963 at p. *4 [plaintiff’s breach of contract claim against Twitter for violation of its terms of service in removing content and suspending his account was clearly barred by terms of user agreement]; Ebeid v. Facebook, Inc., supra, 2019 WL 2059662 at p. *8 [breach of implied
    covenant of good faith and fair dealing failed because Facebook had
    contractual right to remove any post at Facebook’s sole discretion].) Waivers
    of liability that are “ ‘clear, unambiguous and explicit’ ” bar claims that
expressly fall within their scope. (Cohen v. Five Brooks Stable (2008) 159 Cal.App.4th 1476, 1485.) The clear terms of Twitter’s user agreement
    preclude a claim for breach of contract based on the allegations of Murphy’s
    complaint.
    Murphy does not dispute that her claims fall within the scope of
    Twitter’s liability waiver provision but contends that the terms are
    unconscionable and that the court should strike those provisions and enforce
    the remainder of the contract. Murphy fails, however, to allege any facts
    demonstrating the terms are procedurally or substantively unconscionable.
    A contract term may be substantively unconscionable if it is
    “ ‘ “ ‘ “overly harsh” ’ ” [citation], “ ‘unduly oppressive’ ” [citation], “ ‘so one-
    sided as to “shock the conscience” ’ ” [citation], or “unfairly one-sided.” ’ ”
(Baltazar v. Forever 21, Inc. (2016) 62 Cal.4th 1237, 1244.) Murphy alleges
    Twitter’s terms of service are substantively unconscionable because Twitter
    could use them to suspend or terminate user accounts for petty, arbitrary,
    irrational, discriminatory, or unlawful reasons. Terms allowing service
    providers to “discontinue service, or remove content unilaterally,” however,
    are routinely found in standardized agreements and enforced by courts.
(Song fi, Inc. v. Google, Inc. (D.D.C. 2014) 72 F.Supp.3d 53, 63 (Song fi, Inc.).)
    “Unless there is some evidence of ‘egregious’ tactics, . . . ‘the party seeking to
    avoid the contract will have to show that the terms are so extreme as to
    appear unconscionable according to the mores and business practices of the
    time and place.’ ” (Ibid.)
Courts have also recognized that service providers that offer free services to
    Internet users may have a legitimate commercial need to limit their liability
    and have rejected claims that such limitations are so one-sided as to be
substantively unconscionable. (See Lewis v. YouTube, LLC (2015) 244 Cal.App.4th 118, 125–126 [limitation of liability clauses “are appropriate when one party is offering a service for free to the public”; plaintiff failed to
    state a claim for breach of contract because YouTube, LLC’s (YouTube) terms
    of service limited its liability for deleting her video]; Darnaa, LLC v. Google,
LLC (9th Cir. 2018) 756 Fed.Appx. 674, 676 [YouTube’s terms of service not
    substantively unconscionable because it offers video streaming services at no
    cost to user]; see also Song fi, Inc., supra, 72 F.Supp.3d at pp. 63–64 [finding
    YouTube had legitimate commercial need to include forum selection clause].)
    In light of Murphy’s allegations that Twitter provides its services to millions
    of users around the world for free, the contract term allowing it to suspend or
    terminate users’ accounts for any or no reason, absent other factual
    allegations of unfairness, does not shock the conscience or appear unfairly
    one-sided.
    Murphy makes other conclusory allegations of substantive
    unconscionability based on quotes from an arbitration case, Sanchez v.
Valencia Holding Co., LLC (2015) 61 Cal.4th 899, 911, asserting Twitter’s
    terms of service are “ ‘unreasonably favorable to the more powerful party’ ”
    and “ ‘unfairly one-sided.’ ” She further alleges, “The terms purporting to
    give Twitter the right to suspend or ban an account ‘at any time for any or no
    reason’ and ‘without liability to you’ ‘contravene the public interest or public
    policy,’ ‘attempt to alter in an impermissible manner fundamental duties
    otherwise imposed by the law,’ ‘seek to negate the reasonable expectations of
    the nondrafting party,’ and impose ‘unreasonably and unexpectedly harsh
    terms having nothing to do with . . . central aspects of the transaction.’ ” But
    Murphy offers no specific factual allegations in support of these contentions.
    In her reply brief on appeal, Murphy relies on In re Yahoo! Customer
Data Sec. Breach Litigation (N.D.Cal. 2018) 313 F.Supp.3d 1113, 1138, in
    support of her unconscionability argument, but that case does not assist her.
    In In re Yahoo!, the court concluded the plaintiffs had sufficiently pleaded a
    claim for substantive unconscionability “[u]nder the particular circumstances
    of this case,” where they alleged the defendants had legal obligations under
    both state and federal law to maintain acceptable levels of data security, the
    plaintiffs suffered actual damages, and the defendants were better equipped
    to maintain secure systems than individual email users, resulting in an
    allocation of risk that was unreasonable and unexpected. (Id. at pp. 1137–
    1138.) Murphy made no similar allegations of legal obligations on the part of
    Twitter, or of damages suffered by Murphy and other users, with respect to
    Twitter’s waiver of liability for removing content and suspending accounts.
    Murphy also relies on a Pennsylvania case applying California law,
Bragg v. Linden Research, Inc. (E.D.Pa. 2007) 487 F.Supp.2d 593, 595–596,
which is also distinguishable. In Bragg, the court concluded that an arbitration agreement was
    substantively unconscionable because the terms of service provided the
    defendant “with a variety of one-sided remedies to resolve disputes, while
    forcing its customers to arbitrate any disputes with [it].” (Id. at p. 608.)
    Specifically, the company seized the plaintiff’s account, retained funds he had
paid it, and then told him he could resolve the dispute by initiating a costly
    arbitration process. (Ibid.) The court also concluded the defendant’s
    retention of the unilateral right to modify the arbitration agreement
    evidenced a lack of mutuality supporting a finding of substantive
    unconscionability, found other terms of the arbitration agreement unfair, and
    concluded the contract was procedurally unconscionable based on both its
    adhesive nature and surprise. (Id. at pp. 606–611.) No similar facts are
    alleged here.
    Murphy has also failed to allege facts demonstrating the terms were
    procedurally unconscionable. While the contract is an adhesion contract,
    such contracts are “ ‘indispensable facts of modern life that are generally
    enforced.’ ” (Baltazar v. Forever 21, Inc., supra, 62 Cal.4th at p. 1244; see
AT&T Mobility LLC v. Concepcion (2011) 563 U.S. 333, 346–347 [“the times
    in which consumer contracts were anything other than adhesive are long
    past”].) The fact that Murphy had no opportunity to negotiate the terms of
    service, standing alone, is insufficient to plead a viable unconscionability
    claim. (Gatton v. T-Mobile USA, Inc. (2007) 
    152 Cal.App.4th 571
    , 585 [“use of
    a contract of adhesion establishes a minimal degree of procedural
    unconscionability notwithstanding the availability of market alternatives. If
    the challenged provision does not have a high degree of substantive
    unconscionability, it should be enforced.”]; see Sweet v. Google, Inc. (N.D.Cal.
Mar. 7, 2018, No. 17-cv-03953-EMC) 2018 WL 1184777 at p. *5 [“Even if the
    contract(s) at issue were deemed adhesive, that simply establishes a minimal
    degree of procedural unconscionability such that [the plaintiff] would have to
    establish a fair amount of substantive unconscionability in order to
    prevail.”].) Murphy does not allege any facts demonstrating surprise, such as
    “the ‘terms of the bargain [being] hidden in a prolix printed form’ or pressure
to hurry and sign.” (De La Torre v. CashCall, Inc. (2018) 5 Cal.5th 966, 983.)
Accordingly, we conclude she has pleaded facts showing only a minimal degree of procedural unconscionability. Murphy’s failure to plead any other facts supporting her unconscionability analysis, and her failure to demonstrate an ability to amend to plead a cognizable cause of action, lead us to affirm the trial court on this additional ground.
    2. Promissory Estoppel
    A claim for promissory estoppel requires (1) a promise clear and
    unambiguous in its terms, (2) reliance by the party to whom the promise is
    made, (3) the reliance must be reasonable and foreseeable, and (4) the party
asserting the estoppel must be injured by his or her reliance. (Flintco Pacific, Inc. v. TEC Management Consultants, Inc. (2016) 1 Cal.App.5th 727, 734.)
    Murphy alleges she relied on the following “clear and unambiguous” promises
    by Twitter: (1) the “Twitter Rules” applicable when Murphy joined stated,
    “ ‘we do not actively monitor user’s content and will not censor user content,’
    except in limited circumstances such as impersonation, violation of
    trademark or copyright, or ‘direct specific threats of violence against others’ ”;
    (2) Twitter’s terms of service promised Murphy she would receive 30 days’
    notice of any changes to its policies; (3) Twitter’s terms of service stated it
    would not enforce its policies retroactively against her; (4) Twitter’s
    “Enforcement Guidelines” stated that “ ‘account-level’ actions” are reserved
    “for cases where ‘a person has violated the Twitter Rules’ ” repeatedly or in a
    particularly egregious way; (5) Twitter’s “Safety page” states, “ ‘We treat
    everyone equally: the same Twitter Rules apply to all’ and ‘You have the
    right to express yourself on Twitter if you adhere to these rules’ ”; and
    (6) Twitter’s CEO, Jack Dorsey, testified before Congress that Twitter does
    not “ ‘consider political viewpoints, perspectives, or party affiliation in any of
    our policies or enforcement decisions, period.’ ” Murphy alleges she
    reasonably and foreseeably relied on these promises to her detriment, and
    has lost valuable economic interests in access to her Twitter account and
    followers forever.
    None of these alleged promises, however, suffice to state a claim
    because Murphy could not reasonably rely on promises that Twitter would
    not restrict access to her account, “ ‘censor’ ” her content, or take “account-
    level” action when the terms of service stated at all relevant times that
    Twitter could “ ‘remove or refuse to distribute any Content’ ” and could
    suspend or terminate an account “ ‘for any or no reason.’ ” “ ‘ “[W]hether a
    party’s reliance was justified may be decided as a matter of law if reasonable
    minds can come to only one conclusion based on the facts.” ’ ” (Daniels v.
Select Portfolio Servicing, Inc. (2016) 246 Cal.App.4th 1150, 1179; Joffe v. City of Huntington Park (2011) 201 Cal.App.4th 492, 513.) No plausible
    reading of the six contract terms and general statements Murphy identifies
promises users that they will not have their content removed or lose access
    to their account based on what they post online. Indeed, the very terms of
    service that Murphy relies on in asserting her claims make clear that Twitter
    may suspend or terminate an account for any or no reason. (See Malmstrom
v. Kaiser Aluminum & Chemical Corp. (1986) 187 Cal.App.3d 299, 318–319
    [rejecting promissory estoppel claim because reliance on representations
    contradicting written agreement is not reasonable].)
    Moreover, the operative Hateful Conduct Policy in place in
    December 2017, when Murphy began posting her deleted tweets, stated that
    Twitter “do[es] not tolerate behavior that harasses, intimidates, or uses fear
    to silence another person’s voice,” and offered examples of intolerable
    behavior “includ[ing], but not limited to,” “repeated and/or non-consensual
    slurs, . . . sexist tropes, or other content that degrades someone.” The
breadth and ambiguity of this prohibition make any reliance on vague
    statements that Twitter will not “censor” content unreasonable. Because
Murphy has not alleged Twitter ever made a specific representation directly to her or others that it would not remove content from its platform or deny access to their accounts, but rather expressly reserved in its terms of service the right to remove content, including content it determines is harassing or intolerable, and to suspend or terminate accounts “for any or no reason,” Murphy cannot plead reasonable reliance on the alleged promises as a matter of law.
    3. Violation of the UCL
    Murphy’s claim for violation of the UCL alleges that Twitter engaged in
    unfair and fraudulent business practices.
    As an initial matter, Murphy’s UCL claim fails for lack of standing
    because she does not allege a “loss or deprivation of money or property
    sufficient to qualify as injury in fact, i.e., economic injury.” (Kwikset Corp. v.
Superior Court (2011) 51 Cal.4th 310, 322.) Murphy’s complaint alleges that
    she and others “lost a tangible property interest in their accounts and the
    followers they had accumulated,” that she is a “freelance journalist and
    writer who relies on Twitter for her livelihood,” and that “[t]here is no public
    forum comparable to Twitter that would allow Murphy and other users to
    build a widespread following, communicate with a global audience, or support
    themselves in the fields of journalism, politics, or public affairs.” Murphy
    also alleges that without her Twitter account she is “unable to . . . share links
    to her Patreon account (where readers can support her work financially).”
Murphy does not dispute, however, that under Twitter’s terms of service she does not have a property interest in her account or her followers, but only in the content she creates. And while she alleges generally that she relies on
    Twitter for her livelihood, she does not allege she suffered any actual loss of
    income or financial support.
    Murphy’s theory that Twitter engaged in “unfair” behavior is premised
    on the affirmative claim that inclusion of its liability waiver provisions in its
    terms of service was unconscionable. For the reasons discussed above, we
    reject that theory of liability based on the allegations of Murphy’s
complaint.11
11 Because we conclude Murphy has not stated a claim for unconscionability, we need not consider Twitter’s argument that the UCL does not permit such claims. (Compare Rubio v. Capital One Bank (9th Cir. 2010) 613 F.3d 1195, 1205 [“Under California law, . . . unconscionability is an affirmative defense [citation] not a cause of action”] with California Grocers Assn. v. Bank of America (1994) 22 Cal.App.4th 205, 218 [court “assumed” UCL “encompass[ed] an affirmative cause of action for unconscionability”]; De La Torre v. CashCall, Inc., supra, 5 Cal.5th at p. 980 [citing cases assuming UCL encompasses affirmative cause of action for unconscionability].)
    Murphy also asserts that Twitter engaged in fraudulent activity
    “because it held itself out to be a free speech platform” on its website, and in
    advertising, public statements, and its CEO’s testimony before Congress.
    Murphy relies on the same statements and contractual provisions asserted in
her breach of contract and promissory estoppel claims. Murphy claims she and
    others “reasonably assumed that Twitter would allow them to use the forums
    to freely express their opinions on all subjects, without engaging in
    censorship based on their political views and affiliations, so long as they did
    not threaten or harass others.”
Murphy’s allegations regarding Twitter’s general declarations of commitment to free speech principles cannot support a fraud claim because it is unlikely that members of the public would be deceived by such statements. (Kwikset Corp. v. Superior Court, supra, 51 Cal.4th at p. 326
    [fraud prong requires actual reliance on the allegedly deceptive or misleading
statements]; Shaeffer v. Califia Farms, LLC (2020) 44 Cal.App.5th 1125, 1140 [whether consumers are likely to be deceived may be resolved on
    demurrer if facts alleged fail to show as matter of law that a reasonable
    consumer would be misled].) No reasonable person could rely on
    proclamations that “[w]e believe in free expression and think every voice has
    the power to impact the world,” that Twitter was the “free speech wing of the
    free speech party,” or that Twitter’s mission “ ‘is to give everyone the power to
    create and share ideas and information instantly without barriers,’ ” as a
    promise that Twitter would not take any action to self-regulate content on its
platform. (See, e.g., Prager University v. Google LLC (9th Cir. 2020) 951 F.3d 991, 1000 [“YouTube’s braggadocio about its commitment to free speech
    constitutes opinions” and “[l]ofty but vague statements” about allowing
    individuals to speak freely “are classic, non-actionable opinions or puffery”].)
    As to the provisions in Twitter’s user agreement on which Murphy bases her
    breach of contract and promissory estoppel claims, they, too, are not likely to
    mislead members of the public as a matter of law. No reasonable person
    would rely on a general statement that Twitter would not “actively monitor
    user’s content and will not censor user content, except in limited
    circumstances” to mean it would never restrict content, particularly when the
    same rules offered examples of impermissible conduct, reserved the right to
    change the rules, and gave Twitter “the right to immediately terminate [an]
    account without further notice in the event that, in its judgment, [a user]
    violate[s] [the Twitter Rules] or the Terms of Service.” Because Murphy has
    failed to allege representations by Twitter that were likely to deceive or
    mislead members of the public, her UCL claim fails.
    4. Leave to Amend
    Finally, Murphy contends the trial court erred in sustaining the
    demurrer without leave to amend. She argues the trial court “suggested that
    Murphy’s claims would not be barred by [section] 230(c)(1) if she merely
    sought ‘damages for Twitter’s failure to comply with an alleged contractual or
    quasi-contractual promise.’ ” Murphy argues she should have been granted
    leave to amend to “substitute the relief that the Superior Court deemed
    consistent with [section] 230(c)(1).”
    Contrary to Murphy’s argument, the trial court did not suggest
    Murphy’s claims would be viable; rather, it concluded they were barred by
    section 230 immunity. In a footnote, the trial court wrote that Murphy’s case
    was distinguishable from the facts of Barnes because she “is not seeking
    damages for Twitter’s failure to comply with an alleged contractual or quasi-
    contractual promise, but rather is seeking injunctive relief to compel it to
    restore her and others’ Twitter accounts and to refrain from enforcing its
    Hateful Conduct Policy against her.” Despite Murphy’s conclusory statement
    that she should be given leave to amend, she offered no explanation in the
    trial court or in her briefing on appeal of what specific promise or contractual
    provision Twitter violated that would result in a claim for damages or what
    those damages would be.
    Murphy correctly advances the principle that leave to amend may be
    requested for the first time on appeal. (Code Civ. Proc., § 472c, subd. (a).)
    She fails, however, to explain how she could amend to allege a cognizable
    claim that would survive section 230(c)(1)’s broad immunity or remedy the
    defects in her causes of action discussed above. “ ‘[T]he burden of showing
    that a reasonable possibility exists that amendment can cure the defects
    remains with the plaintiff; neither the trial court nor this court will rewrite a
    complaint. [Citation.] Where the appellant offers no allegations to support
    the possibility of amendment and no legal authority showing the viability of
    new causes of action, there is no basis for finding the trial court abused its
    discretion when it sustained the demurrer without leave to amend.’ ” (Total
Call Internat., Inc. v. Peerless Ins. Co. (2010) 181 Cal.App.4th 161, 173.)
    Accordingly, we affirm the judgment of dismissal.
    C. First Amendment
    Because we hold each of Murphy’s claims is barred by section 230(c)(1)
    and Murphy has failed to state a cause of action under California law, we find
    it unnecessary to address Twitter’s argument that Murphy’s claims violate
    the First Amendment. (See Hassell, supra, 5 Cal.5th at p. 534 [courts avoid
    resolving constitutional questions if the issue may be resolved on narrower
    grounds].)
    D. Request for Judicial Notice
    Murphy requested we take judicial notice of (1) a legal ruling from a
    Canadian judicial tribunal; (2) the prepared remarks of Federal Trade
    Commissioner Rohit Chopra for the spring meeting of the American Bar
    Association in 2019; and (3) a settlement agreement in Dept. of Fair Empl. &
    Housing v. AirBnB, Inc. (2017) Nos. 574743-231889, 574743-231624,
    Voluntary Agreement  (as of January 22, 2021).
    The legal ruling from a Canadian tribunal rejecting discrimination
    complaints filed by Jessica Y. is irrelevant to the resolution of this appeal.
    Whether those claims had merit is immaterial to determining whether
    Twitter is entitled to immunity under section 230, or whether Murphy has
    stated a claim for relief. We take judicial notice, however, that the opinion
    refers to one of the individuals Murphy tweeted about, Jessica Y., as a
    “transgender woman.”
    We deny the request for judicial notice of the Federal Trade
    Commissioner’s remarks to the American Bar Association that an overly
    broad reading of section 230 could undermine antitrust enforcement efforts
because they have no relevance to this appeal, which concerns breach of contract
    and promissory estoppel claims.
    We deny Murphy’s request that we take judicial notice of the California
    Department of Fair Employment and Housing’s settlement with AirBnB, Inc.
    outlining policy concerns regarding interactive computer service providers
claiming immunity from antidiscrimination laws, which, again, are not at
    issue in this appeal.
    III. DISPOSITION
    The judgment is affirmed. Defendants are to recover their costs on
    appeal.
    MARGULIES, J.
    WE CONCUR:
    HUMES, P. J.
    BANKE, J.
    A158214
    Murphy v. Twitter, Inc.
    Trial Court:     San Francisco Superior Court
    Trial Judge:     Ethan P. Schulman
    Counsel:
    Dhillon Law Group, Inc., Harmeet K. Dhillon; Michael Yamamoto LLC and
    Gregory R. Michael for Plaintiff and Appellant.
    Wilmer Cutler Pickering Hale and Dorr LLP, Patrick J. Carome, Ari
    Holtzblatt and Thomas G. Sprankling for Defendants and Respondents.
    Jenner & Block LLP, Luke C. Platzer; National Center for Lesbian Rights,
    Shannon Minter, Asaf Orr; Zeb C. Zankel; Ethan C. Wong; Vaughn E. Olson;
    GLBTQ Advocates & Defenders and Jennifer Levi for National Center for
    Lesbian Rights, LAMBDA Legal Defense and Education Fund, Inc., GLBTQ
    Legal Advocates & Defenders, Transgender Law Center, and The Human
    Rights Campaign as Amicus Curiae on behalf of Defendants and
    Respondents.
    Perkins Coie LLP and James G. Snell for Internet Association, Facebook,
    Inc., Glassdoor, Inc., Google LLC, and Reddit, Inc. as Amicus Curiae on
    behalf of Defendants and Respondents.