Forge Forums


V0idWa1k3r


Everything posted by V0idWa1k3r

  1. Additionally, registering your entities in your client proxy makes zero sense - entities are common; the server must know about them.
  2. You don't have a local/field named dragon, hence the error. This is basic Java. Could you please clarify this? I do not think I understand you here. Post your code with this try and the crash report then. You can't have a null registry name for anything, including entities.
  3. It is used as a registry name for the EntityEntry (ForgeRegistries.ENTITIES) registry. It is simply a registry name, just like the one your blocks/items/biomes/potions/whatever have, that's all. You can put anything you want in there, just as you do for your blocks/items/etc.
  4. 1.8 is very outdated and you should update. Generics are a feature of Java and many other programming languages; look them up in a Java manual or use a search engine of your choice. I guess it won't apply to 1.8, as 1.8 was still not using generics for TileEntitySpecialRenderer (aka TESR). If you have moved the translate above your rotate call and it is still not working, I would suggest using a debugger to find out whether your TESR's renderTileEntityAt is being called at all. The client is aware of the values of the fields.
  5. Well, you should first learn the language that you are using to create mods, or at least its basics. Block::onBlockActivated indeed gets called multiple times - once on the server and once on the client. Also, I do not think that that signature is right - where is the EnumHand argument? What version of the game are you creating a mod for? You must translate the matrix first, then rotate it.
  6. I suppose that would be because of these lines: A part of my testing framework that I use to debug code from people when I do not see an obvious issue. Well, I am at a loss here then, as this identical code works for me. If you don't mind, could you please set up a GitHub repo of your project (or a minimal part of it that allows the issue to be reproduced reliably) that can be cloned and debugged locally. I could then debug it with your exact conditions to try to figure out what exactly is wrong. EDIT: even though you've fixed it, I am still curious as to what the issue was exactly. Was it a texture issue?
  7. Hm, interesting. Can you please elaborate on the word 'gone' in your scenario? I have debugged your code and had no issues with it - apart from the fact that it is rendered across the entire screen (I've got that fixed in my test) and the UVs point at a relatively small region of the image.
  8. No you didn't. You are still rendering your items at the view origin and then translating the matrix. You are still not using generics for your TESR. And you are still creating two new ItemStacks each frame. This is a segment of code that shows how you are doing it, not where, and it certainly does not confirm that the TESR is even being registered. I am not really sure actually; I've never done that kind of a custom IBakedModel before. I assume that, as with other custom IBakedModels, you would need a custom ICustomModelLoader implementation with a custom IModel implementation that would allow you to have your custom IBakedModel to begin with. You would need to have two lists of quads in that model - one for your block and one for your item - and in your getQuads implementation you would return a combination of those lists. While I can get the following to work in my test environment, I am sure that there are more efficient ways to do this - (this part was written after my ~30 minute search) and I have found one on this very forum!
  9. Your textureX/Y are of an integer type. So is 1. And so is 256. Dividing an integer 1 by an integer 256 produces 0, and, well, anything * 0 is 0. Why do people create abstractions to begin with? Let's all write our code in assembly! That is sure going to be fun! On a serious note - their BufferBuilder is very flexible, and that is why they did it. As long as you have the format you can easily upload anything you want to it and render it as desired without having to manually set up GL buffers/arrays/attributes/you-name-it every time. Heck, you can even have custom formats defined and it will work just fine. It also allows you to easily modify the individual elements of the buffer, if needed, after they've been uploaded, with a single method invocation. Doing it on a raw buffer is... a bit more challenging. And yet a lot of people had issues with MC starting to require OpenGL 3.3. I remember having one myself on a Linux machine at work due to the way Mesa drivers are done.
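The integer-division pitfall above is plain Java and easy to demonstrate outside the game; the helper names below are made up for illustration:

```java
public class IntDivisionDemo {
    // dividing two ints truncates toward zero before anything else happens
    static int asInt(int textureX) {
        return textureX / 256;          // 1 / 256 == 0
    }

    // promoting one operand to float first keeps the fraction
    static float asFloat(int textureX) {
        return textureX / 256.0F;       // 1 / 256.0F == 0.00390625
    }

    public static void main(String[] args) {
        System.out.println(asInt(1));   // 0
        System.out.println(asFloat(1)); // 0.00390625
    }
}
```

Any UVs computed with the first form collapse to 0 before the multiplication ever happens.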
  10. Well, look at the error message. It tells you exactly what is wrong. You should look into the entire fml_client_latest.log for more verbose output and figure out how to fix that. Do you know what generics are? Using them here will save you from a lot of redundant casts. As to the issue: rendering an item with a block does not require a TESR and can be achieved with a custom IBakedModel. You are currently rendering at the origin point (0, 0, 0) - well, the view-space origin, that is. Translate the matrix by the x, y, z passed to you as parameters. Make sure that the rendered item does not end up inside a block where you can't see it before saying "it doesn't work either". It is probably not a good idea to create two new ItemStack objects each frame, especially considering that they are identical in content. If you need to render a constant ItemStack like that, keep it in a constant field. You haven't shown how/where you are registering your TESR.
  11. Position it in the middle of the screen then. It would be screenWidth/2 - yourWidth/2, screenHeight/2 - yourHeight/2. It is preferred because it is conventional. I mean, in theory you can use Java's ImageIO to load images from an InputStream, upload their pixel data to a buffer, set up a GL texture and link the buffer to the texture every time you need a texture, but you are not doing that, are you? Instead you use TextureManager::bindTexture. The same goes for BufferBuilder. And as I've said: as to how it works, it basically has an internal buffer it fills with the data you pass to it. The offsets, strides and elements per vertex are controlled by the VertexFormat specified. Once you invoke draw, the buffer is passed to OpenGL and rendered. It is somewhat of a complex yet very flexible wrapper around GL 3.0+ rendering. If you are familiar with GL 4.5 you should be familiar with this as well. As a side bonus it allows for "modern" shaders to be used (and MC does that in some cases). Let's look at Gui::drawTexturedModalRect, specifically at its signature: public void drawTexturedModalRect(int x, int y, int textureX, int textureY, int width, int height). The textureX, textureY, width and height parameters are processed into UVs in the method: textureX/Y are the start, and textureX/Y + width/height are the end. How are they processed? Well, let's look into that method. You will quickly notice that the results of manipulations with these values are multiplied by 0.00390625F. What is this magical number? It is 1/256. MC assumes the height/width of a GUI icon texture to be 256x256 and does calculations based on that assumption. If the texture is bigger in size (say, via a resource pack) it actually does not matter, as the range of [0-1] is effectively a percentage, and if the code is based around a 256x256 assumption (the variables passed to this method, that is) the end-result percentages are still going to be correct regardless of the texture width/height.
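The 0.00390625F arithmetic can be checked in a few lines of plain Java; toUv is a hypothetical helper mirroring what drawTexturedModalRect does internally, not an actual MC method:

```java
public class ModalRectUv {
    static final float TEX_SCALE = 0.00390625F; // 1 / 256, MC's assumed GUI sheet size

    // convert a pixel coordinate on a 256x256 sheet into a [0-1] UV
    static float toUv(int texel) {
        return texel * TEX_SCALE;
    }

    public static void main(String[] args) {
        // an icon spanning pixels 16..48 maps to UVs 0.0625..0.1875
        System.out.println(toUv(16));      // start
        System.out.println(toUv(16 + 32)); // end = start pixel + width
        System.out.println(toUv(256));     // far edge of the sheet: exactly 1.0
    }
}
```

Because the result is a fraction of the whole sheet, the same numbers stay correct when a resource pack swaps in a higher-resolution texture.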
  12. You already are. The NBTTagCompound you are writing to the file is the 'data' you are writing. As to the specifications: fill in the NBTTagCompound as you wish then. Do you know how NBT tags work? If you don't, I suggest looking into vanilla classes that serialize/deserialize NBT - tile entities, for example. It looks to me like you are writing some generic data into NBT. Is there a reason you are using NBT specifically?
  13. Your final file name will be storage.dat.dat, as you add the extension in the constructor. Use the debugger to find out what's wrong. Put a breakpoint in your write method. I would say that if the file is not even being created then the issue most likely is within the fileLocation field, so inspect it.
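The double-extension bug is easy to guard against in plain Java; withDatExtension is a made-up helper name, not part of any MC or Forge API:

```java
import java.io.File;

public class DatFileName {
    // only append ".dat" when the caller hasn't already included it
    static String withDatExtension(String name) {
        return name.endsWith(".dat") ? name : name + ".dat";
    }

    public static void main(String[] args) {
        System.out.println(withDatExtension("storage"));     // storage.dat
        System.out.println(withDatExtension("storage.dat")); // storage.dat, not storage.dat.dat
        // a quick way to inspect where the file would actually land:
        System.out.println(new File(withDatExtension("storage")).getAbsolutePath());
    }
}
```

Printing the absolute path of the File is also a cheap first check when a file "is not being created" - it often reveals that it was created somewhere unexpected.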
  14. There are safeWrite and write methods that write the NBT without compression.
  15. CompressedStreamTools, yes. A file can be anything you want; you can set anything as the "file name". All you need to do is write the NBT using methods from CompressedStreamTools into your FileOutputStream.
  16. UVs must be within a range of [0-1] and you are passing a range of [0-16] x [0-51]. That won't work well. By default all MC textures have their wrap S/T set to repeat, and that's why you see "thousands of little textures" - it is actually your texture repeated 16 times on the x axis and 51 times on the y axis. Well, where are you expecting to see your quad? We can't tell what is wrong with the UI positioning just by looking at the code - although I can take a guess: you should subtract your position offsets after you've found the center of the screen. There is absolutely nothing stopping you from using static methods in the GUI class. Or copying the method for that matter. I still suggest using BufferBuilder/VertexBuffer rather than OpenGL directly. There is a reason MC switched to it entirely. And well, some people might assume that all drawing is indeed done using methods from that class and abuse that.
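The screen-centering arithmetic from the posts above (screenWidth/2 - yourWidth/2, then applying offsets relative to that center) is just this; centered is a hypothetical helper name:

```java
public class GuiCentering {
    // top-left coordinate that centers a box of the given size on an axis
    static int centered(int screenSize, int boxSize) {
        return screenSize / 2 - boxSize / 2;
    }

    public static void main(String[] args) {
        // e.g. a 176x166 GUI on an 854x480 scaled-resolution screen
        int guiLeft = centered(854, 176); // 427 - 88
        int guiTop  = centered(480, 166); // 240 - 83
        System.out.println(guiLeft + ", " + guiTop);
    }
}
```

Any per-element offsets should then be added to guiLeft/guiTop, which is the "subtract your position offsets after you've found the center" point made above.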
  17. Don't create a new EntityItem each frame, especially because you are using it only to re-route to the code that renders the ItemStack in pretty much the same way you've commented out. As you've said, the lightmap coordinates are incorrect. Set them manually with OpenGlHelper.setLightmapTextureCoords. The first parameter is the texture unit (OpenGlHelper.lightmapTexUnit); the second and third are your lightmap coordinates. Do not forget to reset them back to what they were after you are done - their current values are stored in OpenGlHelper.lastBrightnessX/Y.
  18. You may need to override TileEntity::getRenderBoundingBox and TileEntity::getMaxRenderDistanceSquared. The first method dictates whether to render your TE when its origin is not within the camera's viewport; the second one dictates the distance at which to stop rendering your TE. Mind the "squared" in the name of the method: by default it returns 4096, which translates to 64 blocks.
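The "squared" convention exists so the game can compare distances without taking a square root per tile entity per frame; a plain-Java sketch of the idea (inRange is a made-up name, 4096 is the default mentioned above):

```java
public class RenderDistance {
    static final double DEFAULT_MAX_DIST_SQ = 4096.0; // 64 * 64 blocks

    // compare squared distances - no Math.sqrt needed
    static boolean inRange(double dx, double dy, double dz, double maxDistSq) {
        return dx * dx + dy * dy + dz * dz <= maxDistSq;
    }

    public static void main(String[] args) {
        System.out.println(inRange(60, 0, 0, DEFAULT_MAX_DIST_SQ)); // true:  3600 <= 4096
        System.out.println(inRange(65, 0, 0, DEFAULT_MAX_DIST_SQ)); // false: 4225 >  4096
    }
}
```

So an override returning, say, 128 * 128 = 16384 would double the render distance to 128 blocks.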
  19. Well, you know that this feature exists in vanilla and you even named the object that has that feature. Why ask on how to do it when you can look how vanilla does that?
  20. Lightmaps are not 3d. The first argument is a texture unit target to apply the coordinates to.
  21. Store the previous coordinates in locals (the current coordinates are stored in OpenGlHelper.lastBrightnessX and OpenGlHelper.lastBrightnessY), set the coordinates to whatever you want, render your thing, and reset them back. The target parameter for OpenGlHelper.setLightmapTextureCoords would be OpenGlHelper.lightmapTexUnit.
  22. The ordering matters. If your format is POSITION_TEX_LMAP_COLOR then you must specify a position, a set of UVs, a lightmap and a color, in that order. Also, as this format uses UVs, you shouldn't disable the texture. You should also provide a set of UVs that makes sense, not fill it with zeroes. Additionally, I believe that a lightmap of 255 255 is actually pretty dark; I use 240 240 when I want full brightness. As you've pretty much got it done, here is a code snippet that renders your quad:
      wr.pos(0, 0, 0).tex(0, 0).lightmap(240, 240).color(r, g, b, 255).endVertex();
      wr.pos(1, 0, 0).tex(1, 0).lightmap(240, 240).color(r, g, b, 255).endVertex();
      wr.pos(1, 1, 0).tex(1, 1).lightmap(240, 240).color(r, g, b, 255).endVertex();
      wr.pos(0, 1, 0).tex(0, 1).lightmap(240, 240).color(r, g, b, 255).endVertex();
      Note that it renders whatever texture was bound last, and it is a small face that only renders when you look at it from a specific side. This is most likely not what you want to achieve, but it is a starting point, I suppose.
  23. Your format is POSITION_COLOR. It doesn't specify a lightmap.
  24. Have you heard of gradients? How do you think they work? The color is a property of a vertex, aka a point. If you define a point with a red color and a point with a green color, the color of the line drawn between those two points will be linearly interpolated. You put it after you define a vertex. When you call pos you define the position element of a vertex. When you call color you define another element of that vertex. wr.pos(0, 0, 0).color(100, 100, 100, 255).endVertex(); is a fully defined position + color vertex. You would either select a vertex format that uses the lightmap as one of its vertex elements or manually set it with OpenGlHelper.setLightmapTextureCoords. If you use the latter, do not forget to reset it back. Additionally, if your vertex format does not include UVs (texture), disable texturing before rendering (GlStateManager.disableTexture2D) and enable it after you are done.
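The per-vertex interpolation described above is ordinary linear interpolation, done per color channel; lerp here is a made-up helper mirroring what the GPU does between two vertices:

```java
public class ColorLerp {
    // linear interpolation of one color channel; t = 0 gives a, t = 1 gives b
    static int lerp(int a, int b, float t) {
        return Math.round(a + (b - a) * t);
    }

    public static void main(String[] args) {
        // halfway along a line from a red vertex (255, 0, 0) to a green one (0, 255, 0)
        int r = lerp(255, 0, 0.5F);
        int g = lerp(0, 255, 0.5F);
        int b = lerp(0, 0, 0.5F);
        System.out.println(r + ", " + g + ", " + b); // 128, 128, 0
    }
}
```

Every fragment along the line gets its own t, which is why two differently colored vertices produce a smooth gradient rather than two solid halves.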
  25. Color is a property of a vertex; you can't just put it into a buffer without any context and expect it to work. After defining a vertex you must end its definition with VertexBuffer::endVertex. There is no point in applying color with GlStateManager, as your vertex format already specifies a color. If you are changing states of GlStateManager, reset them after you are done. If you are disabling the lighting before drawing, enable it after you are done. You have already translated everything by x, y, z with GlStateManager.translate. Putting x, y, z as the origin in your vertices is going to translate them again, causing issues. If you want your rendering 'fullbright', disabling lighting is not going to be enough; you will need to specify lightmap coordinates. That is really up to you. I do not think there are any tutorials out there that would describe how to render "a rift". You would want to play with rendering until you achieve the effect you are after.
