keryxa,

PAY THE WRITERS FOR THEIR JOB. DON’T BE A CHEAP PIRATE.

MigratingtoLemmy,

Technically, if one were to disable the JS used for said paywall on a site, they would never see it again. I haven’t personally done this but has anyone tried?

tias,

If the website developer is worth their salt, the article contents won’t be delivered from the web server until the reader has been authorized. So it doesn’t matter how much JS code you disable.
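To illustrate that point, here is a toy sketch of server-side gating (all names hypothetical, not any real site's code): the HTML sent to an anonymous reader simply never contains the article body, so no amount of client-side script-blocking can reveal it.

```python
# Toy sketch of a server-side paywall: the full text is only rendered
# into the response for authorized subscribers; everyone else gets a
# teaser, so the content never reaches the browser at all.
def render_article(full_text: str, is_subscriber: bool) -> str:
    teaser = full_text[:100]  # only the teaser is sent to everyone
    if is_subscriber:
        return full_text
    return teaser + "... [Subscribe to continue reading]"
```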

Anders429,

Most sites load no content at all if JS is disabled.

Baleine,
@Baleine@jlai.lu avatar

On a majority of sites, all of the page’s content will still be present, at least for SEO. And you get the added bonus that they don’t ask for cookies, etc…

MigratingtoLemmy,

I didn’t ask for JS to be completely disabled, but to disable just enough for the paywall to not crop up

Honytawk,

How would your browser differentiate between the two scripts?

MigratingtoLemmy,

There are multiple scripts running on almost every website. You need to find the one that pops up the paywall. Use NoScript or uBlock Origin (I use both), and with some trial and error it’ll work just fine

myliltoehurts,

It would only work if they specifically bundled the paywall functions into a separate file (which is very unlikely). It also relies on the paywall being entirely front-end, and on the “default” content being served without the paywall (as opposed to the default content being paywalled, with JavaScript loading the actual content).

MigratingtoLemmy,

Not a specific file, but a domain. And yes, if the processing is done server-side then there is very little we can do about that. Note that I’m not asking anyone to disable every script on the page, just the specific script for the pop-up/blurring by the paywall

myliltoehurts,

I think I understood what you were suggesting: try disabling the script tags one by one on a website until either we tried them all or we got through the paywall.

My point is that it’s very unlikely to be feasible on most modern websites.

I mention files because very few bits of functionality tend to be inline scripts these days; 90-95% of JavaScript is loaded from the separate .js files that the script tags reference.

In modern webapps the JavaScript usually goes through some sort of build system, like webpack, which does a number of things; the important one for this case is that it restructures how the code is distributed into the .js files referenced from script tags in the HTML. This makes it very difficult to target a specific bit of functionality to disable, since the paywall code is likely loaded from the same file as a hundred other bits of code that make other features work. Hence my point: sites would actively have to go out of their way to make their build process separate the paywall code from the rest of their codebase, which is probably not something they would do.

On top of this, the same build system may output differently named files after the build since they’re often named after some hashing of the content, so if any code changes in any of the sources the output file name changes as well in an unpredictable way. This would likely be a much smaller issue since I can’t imagine them actively working on all parts of their codebase all the time.
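As an illustration (file names hypothetical), a typical bundler’s output referenced from the HTML looks something like this, where each file bundles many unrelated features and the content hash in the name changes after any edit:

```
<script src="/static/js/main.3f8a1c9d.js"></script>
<script src="/static/js/vendors.7e21bb04.js"></script>
```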

Lastly, if the way a website works is that it loads the content and then some JavaScript hides it behind a paywall, it’s much simpler to either hide the elements in front of it or make the content visible again just by using CSS and HTML - i.e. the way adblockers remove the entire ad element from pages.
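For that last case, cosmetic filters of the kind uBlock Origin supports are often enough (domain and selector names here are hypothetical examples, not real filter-list entries):

```
example.com##.paywall-overlay
example.com##body:style(overflow: auto !important)
```

The first line removes the overlay element; the second undoes the scroll-lock that many paywall popovers apply to the page body.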

MigratingtoLemmy,

Thanks, I understand your point. Thinking about it, this might not work everywhere

killeronthecorner,
@killeronthecorner@lemmy.world avatar

This is an oversimplification. Paywalls are generally designed to resist simplistic “remove popover” approaches. Sites like 12ft.io and paywall-removal extensions use a myriad of techniques across different sites to circumvent both local and server-side paywalls.

hedgehog,

Some extra context / clarification from the thread re Vercel: they did warn him starting two weeks ago. They’ve stated he has a line open with customer support to get his other projects restored but that hasn’t happened yet.

lukas,

I think that Vercel wants to drop them as a customer entirely. Vercel could’ve suspended the services related to 12ft.io, but Vercel chose to nuke their account from orbit. I’m unsure why Vercel suspended their domains, though. That’s just asking for trouble with ICANN.

Resolute3542,

Out of curiosity, how is it an issue with ICANN? I know they can complain to them, but what category will this fall under?

lukas,

Depends on whether Vercel refuses to give them the domain transfer code.

BelieveRevolt,

12ft.io was performative, useless garbage anyway. If any site can just ask your paywall-bypassing site not to bypass their paywall, what is the point of your site?

TheGreenGolem,

Exactly. As soon as they bent over to NYT, I stopped using them.

ArtikBanana,

Bypass Paywalls extension for Firefox.
Works better and for more sites in my experience.

VikingHippie,

How do I install it on Firefox android, though?

My phone won’t open xpi files, and the only solutions I’ve been able to find are either creating an HTML file in the same folder, which I don’t know how to do on Android, or downloading and installing an extension which is ALSO only available as an xpi 🤦

jvrava9,
@jvrava9@lemmy.dbzer0.com avatar

deleted_by_author

  • PolarisFx,
    @PolarisFx@lemmy.dbzer0.com avatar

    No luck there, couldn’t get it to work on any of the Firefox forks. I eventually got it working in Kiwi Browser

    bernieecclestoned,

    Import custom filter

    Bypass Paywalls Clean

    I think it’s this one

    gitlab.com/…/bypass-paywalls-clean-filters

    1984,
    @1984@beehaw.org avatar
    VikingHippie,

Looks like it SHOULD work, but when I search for addons, I get quick (far too quick to select any of them) flashes of gray rectangles where suggestions would normally be, and no search results after I execute the search.

    I’m beginning to suspect that the Firefox app is broken 😕

    SirToxicAvenger,

    is that one of those where you need to manually import the extension into the browser?

    VikingHippie,

    Yeah, it would seem so. Can’t do that, though, owing to the aforementioned refusal of my phone to have anything to do with xpi files 😮‍💨

    SirToxicAvenger,

    oh on a phone? hmm… no idea on that one

    jadelord,

    You could instead use the Web Archives extension. Works for most common paywalls.

    mister_newbie,

    I believe you have to use nightly. Try this ghacks.net/…/you-can-now-install-any-add-on-in-fi…

    Throbbing_Banjo,

    That worked! Awesome, thanks, I tried a few other methods last week with no luck

    VikingHippie,

    Yay, that works! Thank you!

    ASeriesOfPoorChoices,

    Use Kiwi browser instead.

    ArtikBanana,

    Well I use Fennec for Android from F-Droid, which has the option of using custom collections for addons.
    I don’t think it’s possible yet on “normal” Firefox other than Nightly.

    VikingHippie,

    Yeah, on the advice of someone else itt, I switched to Nightly and that worked 🙂

    nomadic,

    archive.md gets around way more paywalls. Highly recommend it.

    nicman24,

    disabling js does more

    trash80,

archive.org is the Internet Archive’s Wayback Machine, and it works as well. There are also archive.is and archive.ph
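    If you want to script this, the Internet Archive has a public availability endpoint at archive.org/wayback/available. A minimal Python sketch for building the query (the JSON it returns describes the closest snapshot, if any):

```python
from urllib.parse import urlencode

def wayback_lookup(page_url: str) -> str:
    """Build the Wayback Machine availability-API query URL for a page.

    Fetching the returned URL yields JSON describing the closest
    archived snapshot of the page, if one exists.
    """
    return "https://archive.org/wayback/available?" + urlencode({"url": page_url})
```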

    doublejay1999,
    @doublejay1999@lemmy.world avatar

They seem to have blocked my IP - I get an impassable captcha

    trash80,

    that is weird

    Cinner,

I also get infinite CAPTCHA blocks on Android trying to connect to archive.md and the other domains. Doesn’t matter if I use Firefox, Chrome, or the Samsung mobile browser.

    doublejay1999,
    @doublejay1999@lemmy.world avatar

    Yep - I have not found a solution. I used it without issue for years which is very weird.

    Cinner,

Same. Started a few months ago. VPNs don’t work either, so it’s some bug with Cloudflare I guess, but it only seems to happen on their site.

    vacras,

It is in fact an issue when using Cloudflare’s DNS. They have written a comment on Hacker News explaining the issue: news.ycombinator.com/item?id=19828702

    Cinner, (edited )

    Interesting thanks for sharing that. I don’t use cloudflare as my dns resolver though.

    EDIT: That’s not true. I just double-checked my DNS settings for this network and it WAS using Cloudflare after all! I guess the private DNS settings weren’t working. Let’s hope this fixes the issue because it’s been a major PITA for months. I will update after the cache has had some time to clear. Thank you!!!

    PeachMan,
    @PeachMan@lemmy.world avatar

    Time for 13ft.io

    doublejay1999,
    @doublejay1999@lemmy.world avatar

    Superb

    PinkPanther,

    This is brilliant! Soon, we might have 14ft.io as well.

    VikingHippie,

    I’m gonna hold out for 69ft.io

    overzeetop,
    @overzeetop@lemmy.world avatar

    nice

    TheGreenGolem,

    Nice

    tun,

    I used to use 12ft.io whenever I needed to read a paywalled article.

    Is the “Bypass paywall clean” extension better than 12ft.io?

    Otome-chan,
    Otome-chan avatar

    I use bypass paywalls clean and never see a paywall. so... yes.

    cole,
    @cole@lemdro.id avatar

    it doesn’t work for medium articles in my experience

    Otome-chan,
    Otome-chan avatar

    I haven't had any issues with medium personally. But I have pretty extensive blocking as a whole (ublock, adguard, ghostery, ddg, bypass paywalls clean, canvas blocker, etc)

    ZeroHora,
    @ZeroHora@lemmy.ml avatar

    Why so many blocking extensions?

    Otome-chan,
    Otome-chan avatar

    I'm paranoid.

    satan, (edited )

It’s just a glorified web scraper; I didn’t know it was this popular. You could build a barebones scraper and output the page in fewer than 10 lines of PHP with curl. And 12ft.io used to inject its own code into the output; it’s funny how people were okay with that.

Anyone who has ever done web scraping knew that serving it on their own public domain was going to be a problem.

    Boy, people are lazy.

    Chozo,
    Chozo avatar

    lmao what an absolutely moronic take.

    blackbirdbiryani,

    Get out and touch grass mate.

    danielton,

    Sure, because everybody who owns a computer, tablet, or smartphone is a web dev. Obviously.

    /s

    GrindingGears,

    Today I learned because I can’t code, I’m useless and lazy.

    The fucking people you come across on the internet…smh

    Catoblepas,
    phoneymouse,

    Please paste the 10 lines here

    doublejay1999,
    @doublejay1999@lemmy.world avatar

    … we’re still waiting

    cashews_best_nut,

    How about 2 lines?

    
    $html = file_get_contents('http://paywalledsite.com/news/article/slammed_in_headline/');
    echo $html;
    
    x1gma,

Yeah, that’s literally the same as 12ft or any other anti-paywall tool. I mean hey, it’s just two lines, that’s even eight fewer than the original smoothbrain’s, absolutely easy for any end user to use. Thanks. /s

    Vex_Detrause,

    Do we put that code in the address bar of Firefox? Seriously asking here.

    cashews_best_nut,

    No. It’s something that would have to be hosted on a webserver with PHP. It’s not practical/feasible for end users to do it.

    cashews_best_nut,

I’m not suggesting people do it, or that the OP was right to suggest it. I just wanted to show it, mostly to see if I could remember how it’s done, since I haven’t done web dev in about 2 years and am a bit rusty.

    bamboo,

That’s not even what 12ft.io was. It wasn’t scraping anything; it was just a redirect to the Google web cache. Importantly, it was also accessible: anyone could use it without installing anything.
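    That kind of redirect is trivial to reproduce. A sketch (assuming the `webcache.googleusercontent.com` URL format Google’s web cache used; the example URL is hypothetical):

```python
from urllib.parse import quote

def google_cache_url(page_url: str) -> str:
    # Google's web cache accepted "cache:<url>" queries in this form;
    # a 12ft-style service just had to redirect the visitor here.
    return ("https://webcache.googleusercontent.com/search?q=cache:"
            + quote(page_url, safe=""))
```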

    empireOfLove, (edited )

It stopped working on any of the sites I ever bothered to use it on anyway - most of them wised up to the crawler bypass and simply made a two-sentence tagline visible to crawlers that hit the SEO terms, with everything else hidden. Soooo, nothing of value lost, and Capital comes to claim its pie once again.

    dmtalon, (edited )

It has seemed to work on fewer and fewer sites for me recently, to the point that I don’t visit it as often as I used to.

    But that tweet does sound like pretty bad news…

    DogMuffins,

    It never ever seemed to work for me.

    noodlejetski,

    by the time it got popular, it was already not working with multiple big sources.

    cheerjoy,
    @cheerjoy@lemmy.world avatar

    The only time I ever used it, they told me they chose not to support that site
