Everything posted by Choonster
-
Forge 1.19 Keep Durability when upgrading Tool in Crafting Table
Choonster replied to BraunBerry's topic in Modder Support
It's entirely possible for crafting table recipes to customise their output (including setting NBT/damage) by overriding IRecipe#assemble. I have a recipe that does a similar thing for armour here.
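As a rough sketch of the idea (not the code from the linked example), an assemble override in a recipe class extending ShapelessRecipe could copy the damage of the first damageable input onto the result; 1.19 mappings are assumed and the surrounding recipe class is left out:

    // Inside a recipe class extending ShapelessRecipe (1.19 mappings)
    @Override
    public ItemStack assemble(final CraftingContainer container) {
        // Let the vanilla logic build the normal result stack first
        final ItemStack result = super.assemble(container);

        // Copy the damage from the first damageable ingredient onto the result
        for (int slot = 0; slot < container.getContainerSize(); slot++) {
            final ItemStack input = container.getItem(slot);
            if (!input.isEmpty() && input.isDamageableItem()) {
                result.setDamageValue(input.getDamageValue());
                break;
            }
        }

        return result;
    }
-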
ServerChatEvent was overhauled in the 1.19.1 update to support the secure chat system, so it's probably not going to be possible to fix this without updating to 1.19.1 or 1.19.2.
-
If you look at the loot table for cows, you'll see that it checks whether the entity is on fire to determine whether to drop steak; not whether the tool has Fire Aspect.
-
I never found a way to ignore the errors; but I eventually replaced HWYLA with Jade, which doesn't have this issue.
-
[1.18.1] Workaround for multiple inheritance
Choonster replied to SoLegendary's topic in Modder Support
Instead of using an interface and extending the vanilla entity classes, could you use a capability? You could have a base class with the shared logic and then entity-specific implementations attached to different entity classes. If you need to do stuff every tick, you'd need to use LivingUpdateEvent.
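Purely as an illustration of that approach (1.18.1 APIs assumed; ISharedBehaviour, SharedBehaviourProvider and the concrete behaviour classes are hypothetical stand-ins for your own code):

    // Registered on the Forge event bus, e.g. via @Mod.EventBusSubscriber
    public class SharedBehaviourHandler {
        // Capability instances are obtained via a CapabilityToken in 1.17+;
        // the capability itself still needs registering in RegisterCapabilitiesEvent
        public static final Capability<ISharedBehaviour> SHARED_BEHAVIOUR =
                CapabilityManager.get(new CapabilityToken<>() {});

        @SubscribeEvent
        public static void attachCapabilities(final AttachCapabilitiesEvent<Entity> event) {
            // Attach a different implementation depending on the entity class
            if (event.getObject() instanceof Zombie) {
                event.addCapability(new ResourceLocation("yourmodid", "shared_behaviour"),
                        new SharedBehaviourProvider(new ZombieSharedBehaviour()));
            } else if (event.getObject() instanceof Skeleton) {
                event.addCapability(new ResourceLocation("yourmodid", "shared_behaviour"),
                        new SharedBehaviourProvider(new SkeletonSharedBehaviour()));
            }
        }

        @SubscribeEvent
        public static void onLivingUpdate(final LivingEvent.LivingUpdateEvent event) {
            // Run the shared per-tick logic for any entity that has the capability
            event.getEntityLiving().getCapability(SHARED_BEHAVIOUR)
                    .ifPresent(behaviour -> behaviour.tick(event.getEntityLiving()));
        }
    }
-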
ObfuscationReflectionHelper methods always take SRG names, even in the development environment. In development, the SRG name is automatically remapped to the corresponding MCP name.
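For example, a field lookup looks something like the line below; the SRG name and target class here are placeholders rather than real mappings:

    // The SRG name is used even in a mapped dev workspace; ObfuscationReflectionHelper
    // remaps it to the corresponding MCP name automatically.
    private static final Field SOME_PRIVATE_FIELD =
            ObfuscationReflectionHelper.findField(SomeVanillaClass.class, "field_12345_a");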
-
The second argument of withExistingParent is the path to a model file to use as a parent, not a texture. For basic block items, the model normally uses the block model as the parent, rather than specifying individual textures. I use this helper method in my BlockStateProvider implementation to generate block item models that simply extend the block model (roughly sketched below). You can see an example of this here. On a side note, the DeferredRegister instance should always be created in the same class that uses it; don't put the DeferredRegister and RegistryObject fields in separate classes.
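A rough sketch of that kind of helper (not the linked code), assuming it lives inside a BlockStateProvider and is handed the block's registry Supplier:

    // Generate an item model whose parent is the corresponding block model,
    // so no textures need to be specified for the item at all.
    private void blockItemModel(final Supplier<? extends Block> block) {
        final ResourceLocation name = ForgeRegistries.BLOCKS.getKey(block.get());
        itemModels().withExistingParent(name.getPath(), modLoc("block/" + name.getPath()));
    }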
-
[1.16.5 official mapping] Specifying potion in recipe builder (Solved)
Choonster replied to reasure's topic in Modder Support
NBTIngredient doesn't have an of method itself; you're actually calling Ingredient.of. You need to create an instance of NBTIngredient directly (or a class that extends it, since the constructor is protected).
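A minimal sketch of such a subclass (the class name is made up; the usage line assumes 1.16.5 official mappings):

    // NBTIngredient's constructor is protected, so expose it via a small subclass
    public class PotionIngredient extends NBTIngredient {
        protected PotionIngredient(final ItemStack stack) {
            super(stack);
        }

        public static PotionIngredient of(final ItemStack stack) {
            return new PotionIngredient(stack);
        }
    }

    // Usage in a recipe builder, matching an Awkward Potion:
    // PotionIngredient.of(PotionUtils.setPotion(new ItemStack(Items.POTION), Potions.AWKWARD))
-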
[1.16.5] Properly using DistExecutor with arguments
Choonster replied to Choonster's topic in Modder Support
Thanks, that makes sense. -
[1.16.5] Properly using DistExecutor with arguments
Choonster replied to Choonster's topic in Modder Support
Thanks, I think that makes sense. I've tried to follow this advice and clean up all my DistExecutor code in this commit, does this look correct? -
I have a packet that's sent to the client to open a GUI, which I'm using DistExecutor to do. The packet's handler method does the following:

    DistExecutor.safeRunWhenOn(Dist.CLIENT, () -> ClientOnlyNetworkMethods.openClientScreen(message))

ClientOnlyNetworkMethods.openClientScreen currently looks like this:

    public static DistExecutor.SafeRunnable openClientScreen(final OpenClientScreenMessage message) {
        return new DistExecutor.SafeRunnable() {
            @Override
            public void run() {
                ClientScreenManager.openScreen(message.getId(), message.getAdditionalData(), Minecraft.getInstance());
            }
        };
    }

ClientScreenManager is a client-only class that handles opening the GUI. As you can see from the code, I need to pass arguments from the packet to the client-only method; which rules out using a method reference as the SafeRunnable implementation.

When I replace the anonymous class implementation of SafeRunnable in ClientOnlyNetworkMethods.openClientScreen with a lambda, DistExecutor.validateSafeReferent throws an "Unsafe Referent usage found in safe referent method" exception. From what I can see, using any non-lambda implementation of SafeReferent simply bypasses the safety checks in validateSafeReferent but doesn't necessarily mean that the code is safe.

The current code with the anonymous class does seem to work on the dedicated server, but is this the correct way to use DistExecutor; or is there a better way to do it?
-
Allow IContainerListeners to opt-in to receiving all slot changes
Choonster replied to Choonster's topic in Suggestions
Yes, that probably would have been a useful feature. It's a shame that the author didn't have time to complete it. -
Allow IContainerListeners to opt-in to receiving all slot changes
Choonster replied to Choonster's topic in Suggestions
Part of the idea with my system was to allow syncing capabilities attached to arbitrary items, not just items that know about their capabilities. What would you recommend for capabilities attached to items from Vanilla or another mod? -
[1.16] syncronising itemstack capabilities for any item
Choonster replied to Tavi007's topic in Modder Support
I've had a brief look at this and can't see any easy way to work around it, so I've created a suggestion thread for a change here. I'm not sure if it will go anywhere. -
I have a system for syncing item capability data that uses ICapabilityListener, as explained here. I discovered in that thread that this pull request for 1.12.2 back in 2017 partially broke my system by changing Container#detectAndSendChanges to only call IContainerListener#sendSlotContents if a slot's Item, count or share tag has changed; which often won't be the case for capability-only updates.

The change does make sense for Vanilla IContainerListener implementations to reduce unnecessary network traffic, but would it be possible to allow modded IContainerListeners to opt-in to having sendSlotContents called even if the Items, counts and share tags are equal?
-
[1.16] syncronising itemstack capabilities for any item
Choonster replied to Tavi007's topic in Modder Support
It looks like Forge patches Container#detectAndSendChanges to only call IContainerListener#sendSlotContents if a slot's Item, count or share tag has changed; which often won't be the case for capability-only updates. I may need to re-evaluate the IContainerListener system to see if there's any way around this. This change was actually introduced in August 2017 for 1.12.2, six months after I created my system. I thought it was working more recently than that, but I must not have tested it properly. -
[1.16] syncronising itemstack capabilities for any item
Choonster replied to Tavi007's topic in Modder Support
With my system, each capability type that needs to be synced to the client has several sync-related classes:

- A single update network message (extending UpdateContainerCapabilityMessage) that syncs the capability data for a single slot of a Container.
- A bulk update network message (extending BulkUpdateContainerCapabilityMessage) that syncs the capability data for all slots of a Container.
- A "functions" class containing static methods used by both the single and bulk update messages.
- A container listener class (extending CapabilityContainerListener) that sends the single/bulk update messages when the Container's contents change. A factory function for the container listener is registered with CapabilityContainerListenerManager.registerListenerFactory at startup so that when a player opens a Container, a new listener can be created and added to it.

The network messages have the concept of a "data" class, which is a simple POJO (or even a primitive type like int or long) containing only the data that needs to be synced from the server to the client. The base classes for the messages handle the functionality that's common to all capability types; the message classes for each capability just need to provide functions to do the following:

On the server:
- Convert an instance of the capability handler (e.g. IFluidHandlerItem for a fluid tank) to a data object
- Encode (write) the data object to the packet buffer

On the client:
- Decode (read) the data object from the packet buffer
- Apply the data from the data object to the capability handler instance

These functions could be defined anywhere (they could even be lambdas passed directly to the base class methods), but I keep them as static methods in a "functions" class so they can be shared between the single and bulk messages. The system might be a bit over-engineered, but it means that I can easily add syncing for a new item capability without having to rewrite all the common syncing logic.

There are several implementations of this in TestMod3 that you could use as examples:

- ILastUseTime, which tracks the time at which an item was last used. This is a simple implementation that syncs a single long value to the client, so it uses Long as its data class.
  - Single update message
  - Bulk update message
  - Functions class
  - Container listener
  - Container listener registration (called during FMLCommonSetupEvent)
- IFluidHandlerItem, Forge's fluid tank capability. This is a slightly more complex implementation that syncs a FluidStack (the tank contents) and an int (the tank capacity), so it uses FluidTankSnapshot as its data class.
  - Single update message
  - Bulk update message
  - Functions class
  - Container listener (only syncs data for my own fluid tank item, to avoid conflicts with other mods' fluid handler items)
  - Container listener registration (called during FMLCommonSetupEvent)
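As a hedged sketch of the shape of one of those "functions" classes (using Long as the data object, as in the ILastUseTime example above; the get()/set() methods on the capability interface are assumptions, not the actual TestMod3 code, and 1.16 class names are assumed):

    public class LastUseTimeFunctions {
        // Server: convert the capability handler into the data object to be synced
        public static Long convertLastUseTimeToData(final ILastUseTime lastUseTime) {
            return lastUseTime.get();
        }

        // Server: write the data object to the packet buffer
        public static void encodeLastUseTimeData(final Long data, final PacketBuffer buffer) {
            buffer.writeLong(data);
        }

        // Client: read the data object from the packet buffer
        public static Long decodeLastUseTimeData(final PacketBuffer buffer) {
            return buffer.readLong();
        }

        // Client: apply the synced data to the client-side capability handler
        public static void applyLastUseTimeData(final ILastUseTime lastUseTime, final Long data) {
            lastUseTime.set(data);
        }
    }
-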
[1.16.3] Registering LootFunctionType and LootConditionType
Choonster replied to The_Wabbit's topic in Modder Support
The OP is asking about Vanilla loot conditions and functions, not global loot modifiers. I'm not 100% sure if it's the correct time to register them, but I do it on the main thread after FMLCommonSetupEvent (i.e. inside a lambda passed to event.enqueueWork). I use the Vanilla registries, just like in your example. I'm not sure if it's necessary to register non-Forge registry entries at any specific time like it is with Forge registry entries. This is my FMLCommonSetup handler, this is my LootConditionType registration and this is my LootFunctionType registration.
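The general shape of that registration, as a sketch rather than the linked code (1.16.3 MCP-era class names assumed; MatchSomethingCondition and its Serializer are placeholders for your own condition):

    // Subscribed to the mod event bus
    @SubscribeEvent
    public static void onCommonSetup(final FMLCommonSetupEvent event) {
        // Vanilla registries aren't thread-safe, so run the registration on the main thread
        event.enqueueWork(() ->
                Registry.register(
                        Registry.LOOT_CONDITION_TYPE,
                        new ResourceLocation("yourmodid", "match_something"),
                        new LootConditionType(new MatchSomethingCondition.Serializer())
                )
        );
    }
-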
[Solved] [1.16.3] Registering Biome with modded Features
Choonster replied to Choonster's topic in Modder Support
Thanks, I saw the Supplier overloads but didn't think to use them for lazy/deferred references. I realised after posting that my issue was actually with a SurfaceBuilder rather than a Feature (I could have moved the Feature registration since it wasn't being used in the Biome), but the same solution applies: use the Supplier overload instead of trying to pass the ConfiguredSurfaceBuilder directly.

For future reference, I fixed the original issue and a few related worldgen registration issues with this commit.

I decided not to go with the JSON route since my biome makes use of a lot of Vanilla features/structures (the same ones as the Vanilla Desert biome) and I didn't want to write all of that out by hand. I would have liked a data generator approach like blockstates, models, loot tables, etc.; but the Vanilla BiomeProvider is only designed to generate "report" files from already-registered Biomes. I did end up adding my own version of BiomeProvider that only generates files for my own mod's biomes in this commit; the generated JSON file is 1,968 lines.
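For anyone hitting the same issue, the fix amounts to something like the sketch below (1.16.3 MCP mappings assumed; EXAMPLE_SURFACE_BUILDER is a placeholder for your own ConfiguredSurfaceBuilder):

    final BiomeGenerationSettings.Builder generationSettings = new BiomeGenerationSettings.Builder()
            // Pass a Supplier so the ConfiguredSurfaceBuilder is only resolved when the
            // Biome is built, after the SurfaceBuilder has actually been registered:
            .withSurfaceBuilder(() -> ModConfiguredSurfaceBuilders.EXAMPLE_SURFACE_BUILDER);
            // rather than .withSurfaceBuilder(ModConfiguredSurfaceBuilders.EXAMPLE_SURFACE_BUILDER),
            // which requires the ConfiguredSurfaceBuilder to already exist at this point.
-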
I'm trying to register a Biome with a Feature from my mod, but I'm having difficulty because Biome registration happens before Feature registration. This is the relevant registration code:

- Feature (DeferredRegister)
- Biome (DeferredRegister)
- ConfiguredFeature (Vanilla registry, called from here in RegistryEvent.Register<Biome> with HIGH priority to run before Biome DeferredRegister)

The game crashes on startup because the ConfiguredFeature registration runs before the Feature has been registered:

    java.lang.NullPointerException: Registry Object not present: testmod3:banner
        at java.util.Objects.requireNonNull(Objects.java:290)
        at net.minecraftforge.fml.RegistryObject.get(RegistryObject.java:120)
        at choonster.testmod3.init.ModConfiguredFeatures.register(ModConfiguredFeatures.java:25)
        at choonster.testmod3.TestMod3.registerBiomes(TestMod3.java:57)

What's the best way to work around this? Should I create the Features in RegistryEvent.Register<Biome> and then register them in RegistryEvent.Register<Feature>?
-
I've reported this to Hwyla here.
-
I've cross-posted this to StackOverflow here, hopefully someone over there will have an answer.
-
That lets it continue to other tasks after javadoc, but the Javadoc generation still fails.